GB2562502A - Visualisation system for needling - Google Patents
- Publication number
- GB2562502A GB2562502A GB1707869.2A GB201707869A GB2562502A GB 2562502 A GB2562502 A GB 2562502A GB 201707869 A GB201707869 A GB 201707869A GB 2562502 A GB2562502 A GB 2562502A
- Authority
- GB
- United Kingdom
- Prior art keywords
- data
- rendered
- user
- module
- headset
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/08—Detecting organic movements or changes, e.g. tumours, cysts, swellings
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/10—Geometric effects
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/08—Detecting organic movements or changes, e.g. tumours, cysts, swellings
- A61B8/0833—Detecting organic movements or changes, e.g. tumours, cysts, swellings involving detecting or locating foreign bodies or organic structures
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/08—Detecting organic movements or changes, e.g. tumours, cysts, swellings
- A61B8/0833—Detecting organic movements or changes, e.g. tumours, cysts, swellings involving detecting or locating foreign bodies or organic structures
- A61B8/0841—Detecting organic movements or changes, e.g. tumours, cysts, swellings involving detecting or locating foreign bodies or organic structures for locating instruments
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/08—Detecting organic movements or changes, e.g. tumours, cysts, swellings
- A61B8/0833—Detecting organic movements or changes, e.g. tumours, cysts, swellings involving detecting or locating foreign bodies or organic structures
- A61B8/085—Detecting organic movements or changes, e.g. tumours, cysts, swellings involving detecting or locating foreign bodies or organic structures for locating body or organic structures, e.g. tumours, calculi, blood vessels, nodules
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/08—Detecting organic movements or changes, e.g. tumours, cysts, swellings
- A61B8/0891—Detecting organic movements or changes, e.g. tumours, cysts, swellings for diagnosis of blood vessels
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/42—Details of probe positioning or probe attachment to the patient
- A61B8/4209—Details of probe positioning or probe attachment to the patient by using holders, e.g. positioning frames
- A61B8/4218—Details of probe positioning or probe attachment to the patient by using holders, e.g. positioning frames characterised by articulated arms
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/42—Details of probe positioning or probe attachment to the patient
- A61B8/4245—Details of probe positioning or probe attachment to the patient involving determining the position of the probe, e.g. with respect to an external reference frame or to the patient
- A61B8/4263—Details of probe positioning or probe attachment to the patient involving determining the position of the probe, e.g. with respect to an external reference frame or to the patient using sensors not mounted on the probe, e.g. mounted on an external reference frame
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/46—Ultrasonic, sonic or infrasonic diagnostic devices with special arrangements for interfacing with the operator or the patient
- A61B8/461—Displaying means of special interest
- A61B8/464—Displaying means of special interest involving a plurality of displays
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/46—Ultrasonic, sonic or infrasonic diagnostic devices with special arrangements for interfacing with the operator or the patient
- A61B8/467—Ultrasonic, sonic or infrasonic diagnostic devices with special arrangements for interfacing with the operator or the patient characterised by special input means
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/46—Ultrasonic, sonic or infrasonic diagnostic devices with special arrangements for interfacing with the operator or the patient
- A61B8/467—Ultrasonic, sonic or infrasonic diagnostic devices with special arrangements for interfacing with the operator or the patient characterised by special input means
- A61B8/468—Ultrasonic, sonic or infrasonic diagnostic devices with special arrangements for interfacing with the operator or the patient characterised by special input means allowing annotation or message recording
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/52—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/5207—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of raw data to produce diagnostic data, e.g. for generating an image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B17/00—Surgical instruments, devices or methods, e.g. tourniquets
- A61B17/34—Trocars; Puncturing needles
- A61B17/3403—Needle locating or guiding means
- A61B2017/3413—Needle locating or guiding means guided by ultrasound
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
- A61B2034/2046—Tracking techniques
- A61B2034/2051—Electromagnetic tracking systems
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
- A61B2034/2046—Tracking techniques
- A61B2034/2055—Optical tracking systems
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/36—Image-producing devices or illumination devices not otherwise provided for
- A61B2090/364—Correlation of different images or relation of image positions in respect to the body
- A61B2090/365—Correlation of different images or relation of image positions in respect to the body augmented reality, i.e. correlating a live optical image with another image
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/36—Image-producing devices or illumination devices not otherwise provided for
- A61B2090/364—Correlation of different images or relation of image positions in respect to the body
- A61B2090/368—Correlation of different images or relation of image positions in respect to the body changing the image on a display according to the operator's position
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/36—Image-producing devices or illumination devices not otherwise provided for
- A61B90/37—Surgical systems with images on a monitor during operation
- A61B2090/378—Surgical systems with images on a monitor during operation using ultrasound
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/50—Supports for surgical instruments, e.g. articulated arms
- A61B2090/502—Headgear, e.g. helmet, spectacles
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/44—Constructional features of the ultrasonic, sonic or infrasonic diagnostic device
- A61B8/4416—Constructional features of the ultrasonic, sonic or infrasonic diagnostic device related to combined acquisition of different diagnostic modalities, e.g. combination of ultrasound and X-ray acquisitions
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/46—Ultrasonic, sonic or infrasonic diagnostic devices with special arrangements for interfacing with the operator or the patient
- A61B8/461—Displaying means of special interest
- A61B8/462—Displaying means of special interest characterised by constructional features of the display
Landscapes
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Public Health (AREA)
- Molecular Biology (AREA)
- Veterinary Medicine (AREA)
- General Health & Medical Sciences (AREA)
- Animal Behavior & Ethology (AREA)
- Surgery (AREA)
- Biophysics (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Pathology (AREA)
- Radiology & Medical Imaging (AREA)
- Biomedical Technology (AREA)
- Heart & Thoracic Surgery (AREA)
- Medical Informatics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Computer Graphics (AREA)
- Software Systems (AREA)
- Computer Hardware Design (AREA)
- Vascular Medicine (AREA)
- Geometry (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Human Computer Interaction (AREA)
- Processing Or Creating Images (AREA)
- Ultra Sonic Diagnosis Equipment (AREA)
Abstract
Scan data representative of an interior portion of a body is received from a scanning portion S200-S202; a first set of positional and orientational data of at least one user is received and a second set in relation to the scanning portion is received; rendered views are generated based on the scan, positional, and orientational data S204-S214; the rendered views are modified based on a user input and combined into a scene for display S216. The scan data may be three-dimensional ultrasound data and the system may further comprise a needle. The system may enable a surgeon to identify the position of a needle and the position of organs whilst performing a procedure. A support portion may support and move the scanning portion. The scene may be displayed on at least one headset; a rendered view may be generated for each of a plurality of headsets based on their respective positional data.
Description
(71) Applicant(s):
Medaphor Limited (Incorporated in the United Kingdom)
Suite 16, Cardiff Medicentre, Heath Park, CARDIFF, CF14 4UJ, United Kingdom
(72) Inventor(s):
Nicholas James Sleep
Stephen Margetts
Kathryn Louise Jenner
Dennis Llewellyn Cochlin
(74) Agent and/or Address for Service:
Urquhart-Dykes & Lord LLP
UDL Intellectual Property, 7th Floor, Churchill House, 17 Churchill Way, Cardiff, CF10 2HH, United Kingdom
(56) Documents Cited:
EP 3015070 A1
US 20160225192 A1
(58) Field of Search:
INT CL A61B, G06T; Other: EPODOC, WPI
(54) Title of the Invention: Visualisation system for needling
Abstract Title: Visualisation system which renders views based on positional and orientational data of a scanner and a user
(57) Scan data representative of an interior portion of a body is received from a scanning portion S200-S202; a first set of positional and orientational data of at least one user is received and a second set in relation to the scanning portion is received; rendered views are generated based on the scan, positional, and orientational data S204-S214; the rendered views are modified based on a user input and combined into a scene for display S216. The scan data may be three-dimensional ultrasound data and the system may further comprise a needle. The system may enable a surgeon to identify the position of a needle and the position of organs whilst performing a procedure. A support portion may support and move the scanning portion. The scene may be displayed on at least one headset; a rendered view may be generated for each of a plurality of headsets based on their respective positional data.
[Drawings, pages 1/17 to 17/17; the figure text is not legibly reproduced. Recoverable labels are summarised below.]
[Figs. 1a-1c: visualisation system, control module 106, transducer 102 gathering data from a volume.]
[Fig. 2a (flow diagram): S200 scanning takes place; S202 series of 2D frames produced; S204 block of voxels transmitted to 3D model constructor; S206 voxels combined with position and orientation data; S208 3D model data block is stored; S210 3D model data block is then transmitted to the reslicer sub-module, the volume rendering sub-module, the image segmentation sub-module and the indicia adder sub-module; S212; S214 rendered view generated by image compositor sub-module; S216 rendered view displayed on headset.]
[Fig. 2b: sub-processes performed as part of the generation of a rendered view.]
[Figs. 3a and 3b: scene overlaid into the field of view of a headset; team of users wearing headsets, with imagery 346a-346e.]
[Fig. 4a (flow diagram): S400 scanning takes place; S410 3D model data block transmitted to the reslicer, volume rendering, image segmentation and indicia adder sub-modules.]
[Fig. 4b: sub-processes performed as part of the generation of a rendered view.]
[Figs. 5-9: rendered views and scenes as described in the description below.]
[Fig. 10 (flow diagram): S1000 control module issues a request for position of needle to be tracked; S1006 line extraction applied to generate possible paths.]
[Figs. 11a and 11b: probe cradle 1102.]
Application No. GB1707869.2
RTM
Date: 26 October 2017
Intellectual Property Office
The following terms are registered trade marks and should be read as such wherever they occur in this document:
Oculus Rift (Page 12)
HTC Vive (Page 12)
HoloLens (Page 12)
HDMI (Page 13)
Intellectual Property Office is an operating name of the Patent Office www.gov.uk/ipo
VISUALISATION SYSTEM FOR NEEDLING
FIELD
The invention relates to a system and apparatus. Particularly, but not exclusively, the invention relates to a visualisation system and apparatus for needling.
BACKGROUND
When carrying out a surgical procedure, such as needling, a user of a needling apparatus typically needs to locate a target within the body, for example, a particular tissue structure, growth or bone. Because the target lies within the body, the surgeon is very often unable to see it, which inevitably makes the procedure more complicated. Current guidance is that “blind” procedures are unsafe and should not be undertaken.
One of the techniques that is used to provide real-time feedback during invasive surgical procedures is ultrasound. Ultrasound guided interventional procedures include biopsies, the placement of drains, aspirations, and peripheral nerve blocks.
Ultrasound guided needling is commonly performed either free-hand (using an unconstrained needle) or using a needle guide on the side of the ultrasound transducer. Needle guides do not aid the majority of procedures.
Free-hand needling using ultrasound is difficult to master as it involves manually keeping the needle tip in the same narrow two-dimensional plane as the ultrasound beam whilst at the same time advancing the needle towards the target.
Augmented reality has previously been used in visualisation systems for use in ultrasound guided needling (www.ncbi.nlm.nih.gov/m/pubmed/15458132).
Needle guidance has also been considered in Sheng Xu et al., https://www.researchgate.net/publication/228446767_3D_ultrasound_guidance_system_for_needle_placement_procedures (art no 6918DH).
Needle tracking is also considered in US20070073155A1.
Aspects and embodiments were conceived with the foregoing in mind.
SUMMARY
Systems in accordance with aspects may be used in needling procedures or other medical procedures.
Where positional or orientational data are described individually, it is intended to mean both positional and orientational data.
Viewed from a first aspect there is provided a visualisation system configured to:
receive scan data from a scanning portion representative of an interior portion of a body; receive a first set of positional and orientational data indicative of the positions of at least one user; receive a second set of positional and orientational data indicative of the position of the scanning portion; generate one or more rendered views of the interior portion using the scan data and the first and second sets of positional and orientational data; receive input from the at least one user indicative of a change in their viewpoint or a desire to modify one or more features of the rendered views; modify the rendered views responsive to the user input; and combine the rendered views into a scene for display by the system.
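Purely as an illustrative sketch of the data flow in this aspect (the function and parameter names are assumptions, not the claimed implementation), one pass of the pipeline could be orchestrated as follows:

```python
# Illustrative sketch only: names and structure are assumptions, not the claimed system.
def update_scene(scan_data, user_poses, scanner_pose, user_input,
                 render_view, apply_input, composite):
    """One pass of the pipeline: render, modify on user input, combine into a scene.

    render_view, apply_input and composite are supplied by the rest of the system
    (reslicing / volume rendering, indicia and enhancement, compositing respectively).
    """
    # One rendered view per tracked user, from the scan data and both sets of
    # positional and orientational data.
    views = [render_view(scan_data, pose, scanner_pose) for pose in user_poses]
    # Modify the views in response to passive or active user input.
    if user_input is not None:
        views = [apply_input(view, user_input) for view in views]
    # Combine the rendered views into a single scene for display.
    return composite(views)
```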
The scan data may be generated by an ultrasound machine with an ultrasound transducer.
The term body means a material object such as, for example, a limb of a patient, a body of a patient, an animal, a part of a manikin, a part of a phantom, a manikin, a phantom or a lump of meat or other material which may be used to simulate a scanning procedure.
Input may be in the form of a passive user input such as, for example, the movement of a headset which the user is using to view the scene, or an active user input such as, for example, a voice command or an input detected through the motion sensing capability of a headset. User input may also be in the form of a button press on a keyboard or a gesture or touch-based input on a touch-screen.
The rendered views in the scene may be generated from the perspective of a user and overlaid into their real world view of the medical procedure. The rendered views may be modified in response to input comprising positional and orientational data representing movement of a user.
The system may be further configured to display a rendered view of the interior portion of the body.
The generation of a rendered view may comprise generating a three-dimensional model representative of the interior of the body. The generation of a rendered view may further comprise applying image processing to the three-dimensional model to extract at least one view through the three-dimensional model. The generation of a rendered view may further comprise augmenting the three-dimensional model with image enhancement to enhance features of the three-dimensional model. The generation of a rendered view may further comprise augmenting the three-dimensional model with added indicia to indicate the location of features of the three-dimensional model.
The system may be further configured to receive positional data indicative of the position of a scanning probe configured to transmit the scan data to the system and generate the rendered view of the interior portion using the scan data and the positional data.
Any of the positional data may be received contemporaneously with the scan data.
The scan data may consist of a three-dimensional data set of ultrasound data which is changing in real-time (sometimes known as “4D ultrasound”).
Alternatively, the scan data may comprise a plurality of two-dimensional frames from a scanning portion and positional and orientational data indicative of the position and orientation of the scanning portion.
The system may be further configured to receive user input to modify any of the rendered views in the scene.
A rendered view may be generated in real-time and co-aligned with the beam from the ultrasound transducer such that the structures seen in the view overlay the anatomy within the body which they image.
The system may further comprise a support portion arranged to support a scanning portion arranged to generate the scan data.
The effect of this is that the user of the visualisation system does not need to hold a scanning portion. This leaves both hands free to carry out the procedure.
Furthermore, the system can be configured to automatically align the transducer with an object of interest without operator intervention.
The support portion may be arranged to move the scanning portion to generate the scan data.
The support portion may be arranged to generate positional data indicative of the position of the scanning portion.
The support portion may be arranged to receive positional data identifying a specified scan plane. The support portion may be arranged to move the scanning portion into the specified scan plane for the generation of scan data.
The support portion may be arranged to periodically move the scanning portion to repeatedly scan over a region of interest.
The support portion may be arranged to be rested on a body such that it remains in a given location until removed by the operator; alternatively, it may be held in place by external means such as a support arm.
The system may further comprise a needle to enable procedures such as needle biopsies, placement of drains, peripheral nerve blocks and needling procedures on internal organs such as the kidney and liver to be carried out.
The modification of a rendered view may be the addition of an image enhancement to enhance a feature in the interior of the body.
The modification of a rendered view may be the addition of indicia, for example, to identify a feature in the interior of the body.
The indicia may comprise a text portion displaying information relating to a target location. The indicia may comprise a text portion displaying external data relating to the body. The indicia may comprise a geometric shape.
An alternative rendered view may contain a volumetric rendering of the three-dimensional model or provide a cut-away view of the three-dimensional model.
An alternative rendered view may comprise a plurality of rendered planes which may comprise a plane co-incident with the ultrasound beam, a plane coincident with an object of interest, such as, for example, a needle, and a plane which is at a configurable angle to the plane coincident with the object of interest.
Rendered views may be composited into a scene and displayed on one or more headsets, which may each be an augmented reality headset.
Where augmented reality headsets are described, the term is intended to mean both augmented reality solutions where computer generated graphics are overlaid onto a user’s field of view, and virtual/mixed reality solutions, where the view from a camera is
composited with the computer generated graphics to generate a view that includes a computer generated version of what the user would ordinarily see.
The system may be further configured to receive positional data from any headset of the at least one headsets; and generate a view of the scene appropriate to the current location and orientation of each respective headset.
The effect of this is that a plurality of users may each wear their own headset and the scene will be generated from the perspective of that user. Any indicia or image enhancement generated by one user may then form part of the view seen by each of the other users.
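A minimal sketch of per-headset view generation is given below, assuming each headset reports a position vector and a 3 x 3 rotation matrix in a shared reference frame; the pose format and the render callback are illustrative assumptions, not a prescribed interface.

```python
# Illustrative sketch: build a world-to-camera matrix from each headset's tracked
# pose so the shared scene can be rendered from every user's own viewpoint.
import numpy as np

def view_matrix(position, rotation):
    """World-to-camera transform for a headset at `position` with 3x3 `rotation`."""
    view = np.eye(4)
    view[:3, :3] = rotation.T             # inverse of a rotation matrix is its transpose
    view[:3, 3] = -rotation.T @ position  # move the world so the headset sits at the origin
    return view

def render_per_headset(scene, headset_poses, render):
    """One rendered frame per headset, each generated from that headset's own pose."""
    return {headset_id: render(scene, view_matrix(pos, rot))
            for headset_id, (pos, rot) in headset_poses.items()}
```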
The system may be further configured to receive user input from any of the plurality of headsets, the user input indicating a modification to a rendered view; and modify the rendered view for the respective headset responsive to the user input from the respective headset.
The effect of this is that each user may make their own modifications to the scene. This means that a team may perform a procedure using the visualisation system and each member of the team may generate rendered views according to their own needs and preferences.
Each rendered view can be overlaid onto the real-world in a configurable location in the augmented reality scene.
A rendered view may be generated based on positional data for another user. This means that such a view is generated from the perspective of a different user and that view may be displayed for other users to see.
The system may be configured to display additional rendered views, displaying data related to the body, which are combined into the scene. The additional rendered views may be generated by the system without the use of positional and orientational data from either a user or a scanning portion.
The system may be configured to display rendered views comprising one or more statically located views of configurable size.
The system may be configured to display a rendered view which is generated using positional and orientational data indicative of the position and orientation of another user.
A rendered view may be a display of data relating to the body, which may have been previously generated before a procedure using the system is carried out.
A rendered view may be a dimensionally different view of another rendered view statically located in a different position in the scene (a “billboard” view).
This removes the necessity to study monitors to obtain the information about the interior portion of the body.
The modification of the scene may comprise the removal of a rendered view.
The user may see different rendered views dependent on where they look in the scene.
The input from the user may identify a target for insertion of a needle. The user input may be voice input, gaze input, keyboard input, or it may be a gesture which is detected by motion detection.
The system may be configured to, responsive to receiving the input identifying a target for insertion of a needle, overlay a marker onto the target.
The effect of this is that the user may add markers to rendered views.
The system may be further configured to generate a path identifier indicating a path from the exterior of the body to the target for insertion of the needle.
The path identifier may be a line or other marking which follows the path from the exterior of the body to the target.
The path identifier may be generated to be orthogonal or at a configurable angle to a scan plane of a scanning portion.
Alternatively, the system may receive user input indicating a path the user wishes to take when inserting the needle.
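As a hedged illustration of such a path identifier, the sketch below samples a straight line from an entry point on the body surface to the selected target and reports its angle to the scan plane; the straight-line assumption and point conventions are illustrative only.

```python
# Illustrative sketch: straight-line path identifier and its angle to the scan plane.
import numpy as np

def path_identifier(entry_point, target, n_samples=50):
    """Sample points along the straight path from the entry point to the target."""
    t = np.linspace(0.0, 1.0, n_samples)[:, None]
    return entry_point + t * (target - entry_point)

def angle_to_scan_plane(entry_point, target, plane_normal):
    """Angle in degrees between the needle path and the ultrasound scan plane."""
    direction = target - entry_point
    direction = direction / np.linalg.norm(direction)
    plane_normal = plane_normal / np.linalg.norm(plane_normal)
    # The angle to the plane is 90 degrees minus the angle to the plane normal.
    cos_to_normal = np.clip(abs(direction @ plane_normal), 0.0, 1.0)
    return 90.0 - np.degrees(np.arccos(cos_to_normal))
```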
The system may be further configured to track objects of interest, such as a needle, by using image processing techniques. These techniques may use data from multiple time points of the three-dimensional data set of voxels. This temporal comparison combined with spatial feature tracking may reduce the search space required to identify objects of interest and may also reduce positional error of the said tracked objects.
The system may be configured to:
receive a request to track an item;
retrieve a plurality of chronologically sequenced three-dimensional data blocks, each data block corresponding to an instance in time at which the three-dimensional data block is generated;
determine the correlation of the plurality of chronologically sequenced three-dimensional data blocks to extract the most likely track of the item from the possible tracks.
The effect of this is that time-series based techniques can be used to reduce the search space needed to track the item of interest. This reduces the computational power required to locate the item of interest. It may also improve location accuracy.
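A minimal sketch of this idea is shown below, assuming the tracked item appears as a locally bright structure; the bright-voxel argmax stands in for whatever detector the system actually uses, and the window size is arbitrary.

```python
# Illustrative sketch: restrict the search in each new 3D data block to a small
# window around the item's last known position, so temporal continuity shrinks
# the search space. The bright-voxel argmax is a stand-in detector.
import numpy as np

def track_item(blocks, start_pos, window=16):
    """Follow an item through chronologically sequenced voxel blocks."""
    track = [np.asarray(start_pos, dtype=int)]
    for block in blocks:
        lo = np.maximum(track[-1] - window, 0)
        hi = np.minimum(track[-1] + window, np.array(block.shape))
        sub = block[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
        offset = np.unravel_index(np.argmax(sub), sub.shape)
        track.append(lo + np.array(offset))
    return np.stack(track)   # the most likely track, one position per time point
```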
The system may be configured to use image enhancement to display the most likely location of the item in the rendered view.
The system may be configured to use image enhancement to display the most likely track of the item on the rendered view.
The track of the item may be emphasised using image enhancement.
The generation of a rendered view may comprise the de-emphasis of at least part of the three-dimensional model data block to generate a cut-away view. The effect of this is that data of interest will be emphasised and data which is not of interest will be de-emphasised in the rendered view.
The scene may comprise a rendered view comprising the de-emphasised portion of at least part of the scan data to generate a cut-away view.
The scene may comprise a rendered view comprising the emphasis of at least part of the scan data.
The scene may comprise a rendered view comprising a plurality of planes which may be selected by a user using user input.
The plurality of planes may also comprise a plane coincident with an object of interest.
The plurality of planes may also comprise a plane at a configurable angle to a plane coincident with an object of interest.
DESCRIPTION
First and second embodiments in accordance with the first aspect will now be described, by way of example only, and with reference to the following drawings in which:
Figure 1a schematically illustrates a visualisation system in accordance with the embodiment;
Figure 1b schematically illustrates a control module 106 in accordance with the embodiment;
Figure 1c schematically illustrates a transducer gathering data from a volume;
Figure 2a is a flow diagram illustrating the steps undertaken by the visualisation system to generate a rendered view from two-dimensional data;
Figure 2b illustrates a set of sub-processes that are performed as part of the generation of a rendered view;
Figure 3a schematically illustrates a scene overlaid into the field of view of a headset;
Figure 3b schematically illustrates a team of users each wearing a headset to communicate with the system in accordance with the embodiments;
Figure 4a is a flow diagram illustrating the steps undertaken by the visualisation system to generate a rendered view from three-dimensional data;
Figure 4b illustrates a set of sub-processes that are performed as part of the generation of a rendered view;
Figure 5 is an illustration of a rendered view which identifies a track for the insertion of a needle;
Figure 6 is an illustration of a rendered view on which a needle is identified;
Figure 7 is an illustration of a rendered view in which a region of interest is identified;
Figure 8 is an illustration of the use of multiple scan planes in a rendered view; and
Figure 9 is an illustration of a scene containing two example rendered views that may be provided by the system in accordance with the embodiment;
Figure 10 is an illustration of the application of time-series to the tracking of an object; and
Figures 11a and 11b are illustrations of a probe cradle which may be used with a system in accordance with the embodiment.
We now illustrate, with reference to Figures 1a to 11b, a visualisation system 100 in accordance with the first and second embodiments.
The visualisation system 100 is described as part of a needling system 200 for reasons of illustration only.
Needling system 200 comprises an ultrasound transducer 102 configured to generate scan data, an ultrasound machine 103 configured to receive the scan data from the ultrasound transducer 102 and to generate an output stream of ultrasound scan data, and a control module 106 which is configured to receive the output stream of ultrasound scan data from the ultrasound transducer via a control interface 108. The needling system 200 further comprises a needle 110 which may be used in a needling procedure.
The tracking module 107 is configured to receive location and orientation data from at least one of the ultrasound transducer 102, and the headset 104. The tracking module 107 uses a suitable tracking system 105, such as magnetic tracking or optical tracking, to process the location and orientation data. The tracking may be performed by a tracking device built into the device being tracked or by an external tracker such as an Ascension TrakSTAR.
The location and orientation data received from the ultrasound transducer 102 indicates the location and orientation of the ultrasound transducer 102. The location and orientation data received from the headset 104 indicates the location and orientation of the headset 104.
The tracking module 107 is configured to use the location and orientation data transmitted from the ultrasound transducer 102 and the headset 104 to determine the position and orientation of the ultrasound transducer 102 and the headset 104. The tracking module 107 is configured to transmit the determined position and orientation of the ultrasound transducer 102 and the headset 104 to the other sub-modules in the control module 106 as described below.
The tracking module 107 is configured to process the location and orientation data in real-time, i.e. as it is received from the respective components, to generate position and orientation data.
The visualisation system 100 further comprises at least one augmented reality headset 104 which communicates with the visualisation system 100 using a headset interface 130. An example of a suitable headset 104 would be the Oculus Rift, the HTC Vive or the Microsoft HoloLens.
The use of an augmented reality headset 104 in this way means that the visualisation system 100 generates an augmented reality scene in the working environment which is used by users of the visualisation system 100.
We will describe later in this description how the rich range of user interface features of an augmented reality headset 104 can enable each of the users of the visualisation system 100 to provide instructions to the visualisation system 100 to augment their reality as part of the use of the visualisation system 100 to perform a needling procedure using the needling system 200.
The visualisation system 100 may further comprise additional augmented reality headsets which are being used by a plurality of users. Each of the augmented reality headsets is arranged to provide individual input to the visualisation system in accordance with what is described below. The following description is applicable to the interaction of any of the additional headsets being used with the visualisation system 100.
Indeed, a plurality of the augmented reality headsets 104 may interact with the visualisation system simultaneously. The tracking module 107 allows the scene displayed by each headset to be generated using the positional and orientational data for that user.
Control module 106 comprises a three-dimensional (3D) model constructor sub-module 106a, a reslicer sub-module 106b, a volume rendering sub-module 106c, an image segmentation sub-module 106d, an image compositor sub-module 106e, an indicia adder sub-module 106f and a tracking module 107. Each of these sub-modules and the tracking module 107 are configured to transmit data between one another using standard data interfacing techniques.
We will now describe how a needling system 200 in accordance with a first embodiment is used to generate a scene containing a rendered view of an interior portion of an arm 300 of a patient during a needling procedure conducted using a tracked ultrasound transducer 102. This is described with reference to Figure 2a.
A user of the needling system 200 uses the transducer 102 in a step S200 to scan the arm 300 by moving the transducer 102 along the arm 300 to generate scan data. This is illustrated in Figure 1c where the scan plane of the transducer 102 is enumerated by the reference numeral 170.
The ultrasound machine 103 interfaces with the transducer 102 to produce a series of two-dimensional frames of scan data in a step S202. At the same time, the frames of scan data are displayed on a monitor on the ultrasound machine 103.
The two-dimensional frames of scan data are also transmitted to an external monitor port on the ultrasound machine 103. Examples of suitable external monitor ports are high definition multimedia interface (HDMI), video graphics array (VGA) and digital video interface (DVI).
In step S204, the two-dimensional frames of scan data are transmitted to the control module 106 as a block of voxels via the ultrasound data interface 108, which associates a time-stamp with the frame as it is received. This time-stamped block of voxels is then transmitted by the control module 106 to the 3D model constructor sub-module 106a. The ultrasound data interface 108 may be a video capture card which provides an input data stream to the volume constructor sub-module 106a.
Contemporaneously, the location and orientation data for the transducer 102 is received by the tracking system 105. The tracking module 107 uses the location and orientation data received by the tracking system 105 to generate position and orientation data for the
transducer 102, also associating a time-stamp with the data as it is received. The position, orientation and time-stamp data is fed to the volume construction module 106a.
The volume constructor module 106a associates the two-dimensional frames from the ultrasound data interface 108 with their corresponding location and orientation data from the tracking module 107 using the time-stamps.
In a step S206 the reslicer module 106b creates a Cartesian representation of the body as a 3D model, by raster scanning the reslice plane along the z-axis, and storing the resultant reslice frames as a 3D array of voxels. Where multiple frames are within a configurable distance from a voxel, and could therefore be used to contribute to it, the most recent frame is used. The control module 106 is configured to delete frames that are older than a pre-configurable age.
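The following sketch illustrates one way the most-recent-frame rule and the age limit could be realised, assuming each tracked frame carries a rotation, a position and a time-stamp; the pixel-to-world mapping and data layout are assumptions for illustration, not the patented method.

```python
# Illustrative sketch: scatter tracked 2D frames into a Cartesian voxel volume,
# keeping a per-voxel time-stamp so the most recent contributing frame wins and
# stale voxels can be aged out. Mapping details are assumptions.
import numpy as np

def insert_frame(volume, voxel_time, frame, frame_time, rotation, position, voxel_size):
    """Write one tracked 2D frame into the voxel volume using its pose."""
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Pixel coordinates in the frame plane -> world coordinates (illustrative mapping).
    plane_pts = np.stack([xs.ravel(), ys.ravel(), np.zeros(h * w)], axis=1) * voxel_size
    world = (rotation @ plane_pts.T).T + position
    idx = np.round(world / voxel_size).astype(int)
    inside = np.all((idx >= 0) & (idx < np.array(volume.shape)), axis=1)
    idx, values = idx[inside], frame.ravel()[inside]
    # Most recent frame wins: only overwrite voxels last written by an older frame.
    newer = frame_time > voxel_time[idx[:, 0], idx[:, 1], idx[:, 2]]
    idx, values = idx[newer], values[newer]
    volume[idx[:, 0], idx[:, 1], idx[:, 2]] = values
    voxel_time[idx[:, 0], idx[:, 1], idx[:, 2]] = frame_time

def age_out(volume, voxel_time, now, max_age):
    """Clear voxels whose contributing frame is older than a configurable age."""
    volume[now - voxel_time > max_age] = 0
```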
The 3D construction sub-module 106a is configured to store the generated 3D model data block in a format which is suitable for further processing by the reslicer sub-module 106b, the volume rendering module 106c, the image segmentation sub-module 106d and the indicia adder 106f in a step S208.
The 3D model constructor sub-module 106a then transmits the 3D model data block in the stored format to each of the reslicer sub-module 106b, the volume rendering sub-module 106c, the image segmentation sub-module 106d and the indicia adder sub-module 106f in a step S210 immediately after the 3D model data block has been stored, i.e. in real-time.
As set out below, the reslicer sub-module 106b, the volume rendering sub-module 106c, the image segmentation sub-module 106d and the indicia adder sub-module 106f may be used to modify rendered views based on input by the user.
The reslicer sub-module 106b, the volume rendering sub-module 106c, the image segmentation sub-module 106d and the indicia adder sub-module 106f then call specific image processing routines to perform the operations on the 3D model data block that will now be described. The operations are performed independently by the respective sub-module
on the 3D model data block in a step S212 as four independent sub-processes which we enumerate as S212A, S212B, S212C and S212D which are illustrated in Figure 2b.
In step S212A, the reslicer sub-module 106b calls an image reslicing routine such as nearest neighbour reslicing as described by https://camtools.cam.ac.uk/access/content/group/d4fe6800-4ce2-4bad-8041- to generate one or more two-dimensional slice planes from the 3D model data block. As is set out below, the one or more two-dimensional slice planes may be coincident with an object of interest or at a configurable angle to that plane. The one or more two-dimensional slice planes may also be requested for use by the volume constructor module 106a.
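The cited reslicing document is not reproduced here, but a nearest neighbour reslice along these lines would sample a plane out of the voxel block by rounding each sample position to its closest voxel; the plane parametrisation below is an assumption for illustration.

```python
# Illustrative sketch of nearest neighbour reslicing: sample a 2D plane, given by
# an origin and two in-plane axis vectors (in voxel units), from the 3D block.
import numpy as np

def reslice_nearest(volume, origin, u_axis, v_axis, shape=(256, 256)):
    """Extract a 2D slice from `volume` along the plane origin + i*u_axis + j*v_axis."""
    ii, jj = np.meshgrid(np.arange(shape[0]), np.arange(shape[1]), indexing="ij")
    pts = origin + ii[..., None] * u_axis + jj[..., None] * v_axis
    idx = np.clip(np.round(pts).astype(int), 0, np.array(volume.shape) - 1)
    return volume[idx[..., 0], idx[..., 1], idx[..., 2]]
```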
The reslicer sub-module 106b outputs the data corresponding to the generated two-dimensional slice planes from the 3D model data block. The data is transmitted to the image compositor sub-module 106e.
In step S212B, the volume rendering sub-module 106c is configured to call an image rendering routine such as the routine described in http://www.h3dapi.org/modules/mediawiki/index.php/MedX3D. The data is then transmitted to the image compositor sub-module 106e.
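The MedX3D routine referenced above is not reproduced here; as a hedged stand-in, a simple front-to-back ray march along one volume axis gives the flavour of the volume rendering step (the linear opacity transfer function is an assumption).

```python
# Illustrative stand-in for the volume rendering routine: front-to-back alpha
# compositing of the voxel block along one axis.
import numpy as np

def volume_render(volume, axis=0, opacity_scale=0.02):
    """Composite a 3D block into a 2D image by marching front to back along `axis`."""
    vol = np.moveaxis(volume.astype(float), axis, 0)
    vol = vol / (vol.max() + 1e-9)                 # normalise intensities to [0, 1]
    colour = np.zeros(vol.shape[1:])
    transmittance = np.ones(vol.shape[1:])
    for slab in vol:                               # front-to-back ray march
        alpha = np.clip(slab * opacity_scale, 0.0, 1.0)
        colour += transmittance * alpha * slab
        transmittance *= 1.0 - alpha
    return colour
```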
In step S212C, the image segmentation sub-module 106d calls an image enhancement routine which receives the 3D model data block as an input. The image enhancement routine may apply thresholding, clustering or compression-based methods to the received volumetric data block to generate a segmented 3D model data block. The image enhancement routine may also apply image segmentation methods such as those described in https://www.ncbi.ri to generate a segmented 3D model data block.
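As a hedged example of the thresholding option mentioned above (the referenced segmentation methods themselves are not reproduced), a minimal intensity threshold over the 3D model data block might look like this:

```python
# Illustrative sketch: threshold-based segmentation of the 3D model data block.
import numpy as np

def segment_by_threshold(volume, threshold=None):
    """Label voxels above an intensity threshold; defaults to the mid-range value."""
    if threshold is None:
        threshold = 0.5 * (float(volume.min()) + float(volume.max()))
    return (volume > threshold).astype(np.uint8)   # 1 = structure of interest, 0 = background
```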
The image segmentation sub-module 106d may also be configured to track objects of interest such as a needle and to pass the position and orientation of said objects to the reslicer module
106b to allow automatic update of rendered scan planes based on the position of objects within the body.
The segmented 3D model data block generated by the image segmentation sub-module 106d is then transmitted to the image compositor sub-module 106e.
In step S212D, the indicia adder sub-module 106f is configured to generate and/or retrieve additional information which may be of use to the user. Such additional information may comprise other data from other data services and/or other visual information which may be of use to the user such as marker data which is then transmitted to the image compositor sub-module 106e.
The data which has been transmitted from the image reslicer sub-module 106b, the volume renderer sub-module 106c, the image segmenter sub-module 106d and the indicia adder sub-module 106f to the image compositor sub-module 106e is then combined in a step S214 to generate rendered views which are then further composited to form the scene.
The combination in step S214 generates the rendered views and the scene using well known compositing techniques such as depth buffering, alpha blending and techniques used in industry-standard 3D model-based game engines such as Unity 5.6 (https://unity3d.com/) and Unreal4 (https://www.unrealengine.com).
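A minimal sketch of the alpha blending part of that compositing step is shown below, assuming each rendered view is delivered as an RGBA image ordered back to front; the depth-buffered and game-engine variants are not shown.

```python
# Illustrative sketch: alpha-blend a back-to-front stack of rendered RGBA views
# into a single scene image (the "over" operator, painter's algorithm).
import numpy as np

def composite_views(views):
    """`views` is a list of (H, W, 4) float RGBA arrays ordered back to front."""
    scene = np.zeros_like(views[0][..., :3])
    for view in views:
        rgb, alpha = view[..., :3], view[..., 3:4]
        scene = alpha * rgb + (1.0 - alpha) * scene
    return scene
```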
The image compositor sub-module 106e transmits the scene data to the augmented reality headset 104 in a step S216. The scene is then displayed for the user of the needling system 200 in real-time by the headset 104.
The control module 106 then loops back to step S200 where the next collection of scan data is received from the ultrasound machine 103 so that the real-time generation of the scene through steps S200 to S216 can be maintained.
We will now describe how a needling system 200 in accordance with a second embodiment is used to generate a rendered view of an interior portion of an arm 300 of a patient during a needling procedure using an ultrasound transducer 102. This is described with reference to Figure 4a.
A user of the needling system 200 uses the transducer 102 in a step S400 to scan the arm 300 by placing the transducer 102 on the arm 300 and using the ultrasound machine 103 to generate a three-dimensional block of scan data. As in the previous embodiment, this is illustrated in Figure 1c.
The ultrasound machine 103 interfaces with the transducer 102 to generate a 3D model data block representing the body in a step S402. The 3D model data block is then transmitted to the control module 106 via the ultrasound data interface 108 using a standard data bus. The 3D model data block is formed from a block of voxels.
Contemporaneously, the location and orientation data for the transducer 102 is received by the tracking system 105. The tracking module 107 uses the location and orientation data received by the tracking system 105 to generate position and orientation data for the transducer 102. The position and orientation data is fed to the image compositor sub-module 106e.
The position and orientation data is only required in this embodiment for the purposes of orienting the 3D model data block and not for the construction of the 3D model data block.
The block of voxel data is then transmitted to the control module 106 via the control interface 108 in a step S404 where the block of voxels is transmitted to the 3D model constructor sub-module 106a via the ultrasound data interface 108 as the block of voxels is received by the control module 106, i.e. in real time. The control module 106 associates a timestamp with the block of voxel data as it is received and may discard data older than a pre-configurable age.
Responsive to receiving the 3D model data block through the ultrasound data interface 108, the volume construction module 106a is configured to store the 3D model data block in a
format which is suitable for further processing by the reslicer module 106b, the volume rendering module 106c, the image segmentation module 106d and the indicia adder sub-module 106f in a step S408. The timestamp assigned to the block of voxel data is also stored.
The 3D model constructor sub-module 106a then transmits (in parallel) the timestamped 3D model data block in the stored format to each of the reslicer sub-module 106b, the volume rendering sub-module 106c, the image segmentation sub-module 106d and the indicia adder sub-module 106f in a step S410 immediately after it has been stored.
As set out below, the reslicer sub-module 106b, the volume rendering sub-module 106c, the image segmentation sub-module 106d and the indicia adder sub-module 106f may be used to modify any of the rendered views generated by the system 100 based on input from a user.
The reslicer sub-module 106b, the volume rendering sub-module 106c, the image segmentation sub-module 106d and the indicia adder sub-module 106f then call specific image processing routines to perform the operations on the 3D model data block that will now be described. The operations are performed independently by the respective sub-module on the 3D model data block in a step S412 as four independent sub-processes which we enumerate as S412A, S412B, S412C and S412D which are illustrated in Figure 4b.
In a step S412A, the reslicer sub-module 106b calls an image slicing routine such as a nearest neighbour reslicing routine as described by https://camtools.cam.ac.uk/access/content/group/d4fe6800-4ce2-4bad-8041- to generate one or more two-dimensional slice planes from the 3D model data block. As in the previous embodiment, the one or more two-dimensional slice planes may be requested by each user and may be planes that are coincident with an object of interest or transverse to that plane. The two-dimensional slice planes may also be requested by the volume constructor module 106a.
The reslicer sub-module 106b outputs the data corresponding to the generated two-dimensional slice planes from the 3D model data block. The data is transmitted to the image compositor sub-module 106e.
In a step S412B, the volume rendering sub-module 106c is configured to call an image rendering routine such as the routine described in http://www.h3dapi.org/modules/mediawiki/index.php/MedX3D. The data is then transmitted to the image compositor sub-module 106e.
In a step S412C, the image segmentation sub-module 106d calls an image enhancement routine which receives the 3D model data block as an input. The image enhancement routine may apply thresholding, clustering or compression-based methods to the received volumetric data block to generate a segmented 3D volumetric data block. The image enhancement routine may also apply image segmentation methods such as those referenced above in relation to step S212C to generate a segmented 3D volumetric data block.
The segmented 3D model data block generated by the image segmentation sub-module 106d is then transmitted to the image compositor sub-module 106e.
In a step S412D, the indicia adder sub-module 106f is configured to generate and retrieve additional information which may be of use to the user. Such additional information may comprise other data from other data services and/or other visual information which may be of use to the user such as marker data which is then transmitted to the image compositor sub-module 106e.
The data which has been transmitted from the image reslicer sub-module 106b, the volume renderer sub-module 106c, the image segmenter sub-module 106d and the indicia adder sub-module 106f to the image compositor sub-module 106e is then combined in a step S414 to generate rendered views which are then further composited to form the scene.
The combination in step S414 generates the rendered views and the scene using well known compositing techniques such as depth buffering, alpha blending and techniques used in industry-standard 3D model-based game engines such as Unity 5.6 (https://unity3d.com/) and Unreal4 (https://www.unrealengine.com).
The image compositor sub-module 106e transmits the scene data to the augmented reality headset 104 in a step S416. The scene is then displayed for the user of the needling system 200 in real-time by the headset 104.
The control module 106 then loops back to step S400 where the next collection of scan data is received from the ultrasound machine 103 so that the real-time generation of the scene through steps S400 to S416 can be maintained.
An example of a scene generated by either the first embodiment (Figure 2a) or the second embodiment (Figure 4a) is illustrated in Figure 3 as part of a display which is overlaid onto the field of view of the user of the headset 104 as part of the augmented reality environment which is generated by the visualisation system 100 using the headset 104. Figure 3 illustrates the rendered view from the perspective of the headset 104 looking onto an arm 300 of a patient whilst an ultrasound transducer 102 (with scan plane 320) is being used to generate scan data for the arm 300. The user is looking for vessel 306 using the ultrasound transducer 102 and the overlay of the rendered view into the field of view of the headset enables the vessel 306 to be visualised whilst the procedure is being carried out with the user looking at the arm 300.
That is to say, whilst the user is wearing the headset 104 and looking at the arm 300 to carry out the needling procedure using the needling system 200, a rendered view of the interior portion of the arm 300 is superimposed onto the user’s view of the real-world surroundings whilst the user looks at the arm 300.
The rendered view is in 3D and the rendered view provides the user with a view of the internal organs of the arm 300 in 3D, optionally with certain structures highlighted by the
image segmentation sub-module 106d. Each of the users is provided with their own rendered view of arm 300 using positional and orientational data obtained from each headset.
It may be that the user wants to only perform a procedure on vessel 306 and does not want to collide with vessels 310 and 312. The visualisation of vessels 310 and 312 assists the user in identifying vessels 310 and 312 which increases the chances that the user will avoid these vessels during the procedure.
The effect of displaying the rendered view on the headset 104 in real-time, i.e. as it is generated, is that the rendered view is representative of the arm 300 at that point in time and provides the user with a constantly updated image of the interior of the arm.
This means that the system 100 can provide an updated view of the interior of the arm shown by an ultrasound image, which would, for example, display the progress of a needle through the arm as it is inserted during a needling procedure.
In providing the rendered view in the same field of view as the procedure, the user does not have to maintain watch over a remotely situated monitor on the ultrasound machine 103. This has ergonomic benefits as the user does not have to keep turning their head to view the monitor on the ultrasound machine 103.
By co-locating a rendered view with the anatomical structures it is imaging, the operator gains further ergonomic benefits as it becomes more intuitive as to where to insert the needle to obtain the trajectory required. This may reduce the training need and improve patient safety.
As will be appreciated from Figure 3, the benefits of the visualisation system 100 can be realised without the system having to be used during a needling procedure.
The first embodiment described with respect to Figure 2a and the second embodiment described with respect to Figure 4a may additionally comprise the following optional features.
The embodiment described with respect to Figure 2a and the embodiment described with respect to Figure 4a may also be described in the same way with respect to any of a plurality of augmented reality headsets 104.
That is to say, a rendered view may be generated for any of the augmented reality headsets 104 using positional and orientation data transmitted from the respective headset to the tracking module 107. The effect of this is that if a team of users is using the visualisation system 100 to perform a needling procedure, the rendered view is generated by the visualisation system 100 and overlaid into the real-world view of the situation from the perspective of each user.
For example, if the arrangement in Figure 3b is considered, there is a patient 340 lying on a bed with a team of users working around the patient.
The visualisation system 100 generates an augmented reality environment which overlays computer-generated imagery into the real-world environment surrounding the users and the patient.
The effect of this is that multiple sets of imagery (enumerated as 346a, 346b, 346c, 346d and 346e) can be augmented into the real world within the augmented reality environment generated by the visualisation system 100 and the headsets (enumerated as 104a, 104b, 104c and 104d) being worn by the users (342a, 342b, 342c, 342d). Naturally, each of the users will have a different perspective on the arm 300 of the patient. The positional and orientational data transmitted by the respective headsets to the tracking module 107 means that the visualisation system 100 generates a rendered view for each user that is overlaid into that user's real-world view of the environment through the respective headset. That is to say, when the user is looking down at the patient during the procedure, the rendered view provided in steps S216 or S416 is overlaid into that user's real-world view to enable them to see a rendered view of the ultrasound imagery overlaid onto the arm 300 of the patient.
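As a purely illustrative sketch of how the per-headset positional and orientational data could drive a separate rendered view for each user, the snippet below builds one view matrix per tracked headset. The `look_at` helper and the placeholder poses are assumptions for the sketch and are not part of the disclosed tracking module 107.

```python
import numpy as np

def look_at(eye, forward, up=(0.0, 1.0, 0.0)):
    """Right-handed view matrix from a headset pose.

    `eye` is the headset position in world space and `forward` its gaze
    direction; both would come from the tracking data supplied by the
    headset.  The same 3D model data block can then be rendered once per
    headset, each time with that headset's own view matrix.
    """
    f = np.asarray(forward, dtype=float)
    f /= np.linalg.norm(f)
    r = np.cross(f, np.asarray(up, dtype=float))
    r /= np.linalg.norm(r)
    u = np.cross(r, f)

    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = r, u, -f
    view[:3, 3] = -view[:3, :3] @ np.asarray(eye, dtype=float)
    return view

# One rendered view per tracked headset pose (positions and directions
# below are placeholder values standing in for tracking-module output).
headset_poses = {
    "104a": ((0.0, 1.6, 0.5), (0.0, -0.8, -0.6)),
    "104b": ((0.4, 1.7, 0.0), (-0.5, -0.8, 0.0)),
}
view_matrices = {hid: look_at(eye, fwd) for hid, (eye, fwd) in headset_poses.items()}
```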
The image data to be composited into the scene by the image compositor sub-module 106e is selected from the data from sub-modules 106b, c, d and f based on user input and configuration data. The data from a particular sub-module may be displayed differently in different rendered views. Where required, the sub-modules may process their input data more than once for a particular 3D data block with different configuration parameters.
Optionally, the visualisation system 100 may render more than one rendered view into a user's scene by performing steps S200-S214 or steps S400-S414 to produce each rendered view and then using the image compositor sub-module 106e to further composite these rendered views into the user's scene.
An example of this is creating a rendered view from the perspective of another user (a second user) of the system who is wearing another headset 104. The second user may be viewing the procedure at a different angle relative to the patient. The first user may want to see the rendered view that the second user is being provided with.
Other examples include rendered views of data relating to the body. More than one such rendered view may be generated, each containing different subsets of the data or data relating to different parts of the body. Examples of data from the body include CT, MRI, X-Ray, previous ultrasound data, data from the patient’s notes, histology results, contemporaneous results from other patient monitoring systems.
As shown in Figure 8, a rendered view may be shown co-aligned with the structures that it images within the body.
Additionally, as shown in Figure 3, a rendered view may be shown so that it is co-aligned to the end of the ultrasound transducer 102, tracking this as it is moved.
Rendered views may also be displayed as statically located “billboard” displays placed within the augmented reality scene by the visualisation system 100. Billboard views may be of different sizes. Billboard views are illustrated by Figure 9, which shows an example of a
billboard view 904 relative to a rendered view 902 which shows a collection of billboard views arranged in the augmented reality scene.
Optionally, the needling system 200 may further comprise a probe cradle 120 configured to hold the ultrasound transducer 102 in position at the appropriate position on the patient. This means that the user of the needling system 200 does not need to use one of their hands to hold the ultrasound transducer 102 during the needling process. This leaves both hands free to carry out the procedure. The probe cradle is illustrated with reference to Figures 11a and 11b.
As shown in Figure 11a, the probe cradle 120 may comprise a jointed arm 1102 coupled to the probe 102 by an attachment device comprising a collar 1108 around the probe 102. Optionally, the collar 1108 may be moved by motors 1105 between the individual joints of the jointed arm 1102 allowing the position and orientation of the probe 102 to be controlled in up to six degrees of freedom. Encoders may be attached between the individual joints of the arm 1102 and the motors 1105. Data from the encoders can be used to infer positional and orientational information of the probe 102 by reference to the position of the joints.
Alternatively, as shown in Figure 11b, the probe cradle 120 may be built into a cuff 1106. The probe cradle 120 may comprise arms 1110 coupled to an attachment device comprising a collar 1108 around the probe 102. Optionally, the collar 1108 may be moved by motors 1105 coupled to the arms 1110, allowing the orientation of the probe 102 to be controlled in up to three axes. Encoders may be attached between the arms 1110 and the motors 1105. Data from the encoders can be used to infer positional and orientational information of the probe 102 by reference to the position of the arms 1110.
Using encoders as part of a probe cradle 120 eliminates the need to affix a tracking device to the ultrasound transducer 102.
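By way of illustration, the probe pose can be inferred from the encoder readings by chaining the joint transforms of the arm, i.e. forward kinematics. The sketch below assumes simple revolute joints about a local axis and fixed link lengths; the geometry, function names and example values are illustrative rather than the actual kinematics of the cradle 120.

```python
import numpy as np

def rot_z(theta):
    """Homogeneous rotation about the local z-axis (one revolute joint)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0, 0.0],
                     [s,  c, 0.0, 0.0],
                     [0.0, 0.0, 1.0, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])

def translate(x, y, z):
    """Homogeneous translation along a rigid link."""
    t = np.eye(4)
    t[:3, 3] = (x, y, z)
    return t

def probe_pose(joint_angles, link_lengths):
    """Chain joint transforms of the jointed arm to infer the position
    and orientation of the probe collar from encoder readings."""
    pose = np.eye(4)
    for theta, length in zip(joint_angles, link_lengths):
        pose = pose @ rot_z(theta) @ translate(length, 0.0, 0.0)
    return pose  # 4x4 homogeneous transform of the probe in arm coordinates

# Example: three encoder readings (radians) and three link lengths (metres).
pose = probe_pose([0.3, -0.5, 0.2], [0.25, 0.20, 0.10])
position, orientation = pose[:3, 3], pose[:3, :3]
```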
For example, the probe cradle 120 may be configured to receive the data relating to the plane containing the needle from the image reslicer sub-module 106b and use that data to move
the ultrasound transducer 102 into an orientation in which a scan plane from the ultrasound transducer 102 coincides with the longitudinal plane of the needle 110. The ultrasound transducer 102 may then be held by the probe cradle 120 in that position.
Alternatively, the requested positional and orientational data may be time-dependent, to cause the cradle to sweep the ultrasound transducer 102 through a volume of the body to allow this volume to be repeatedly scanned. In the first embodiment, such a sweep over the area of interest enables a plurality of two-dimensional planes to be obtained and hence for the system to create a 3D model data block of the volume of interest without the user having to manually manipulate the ultrasound transducer 102. By programming this sweep to be periodic, the volume of interest may be scanned repeatedly, to allow a standard 2D ultrasound transducer to be used to image a moving 3D volume of the body.
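A minimal sketch of how such a sweep could be accumulated into a 3D model data block is given below, assuming each two-dimensional frame arrives together with the cradle-reported pose of its scan plane and that the voxel spacing is isotropic. Nearest-voxel insertion is used for brevity; the function and array names are assumptions.

```python
import numpy as np

def insert_frame(volume, frame, frame_to_volume, spacing):
    """Scatter one 2D ultrasound frame into a 3D data block.

    `frame_to_volume` is a 4x4 pose mapping frame pixel coordinates
    (column, row, 0, 1) into volume millimetre coordinates; `spacing`
    is the isotropic voxel size in millimetres.  Nearest-voxel insertion
    keeps the sketch short; a real implementation would interpolate.
    """
    rows, cols = frame.shape
    c, r = np.meshgrid(np.arange(cols), np.arange(rows))
    pix = np.stack([c.ravel(), r.ravel(),
                    np.zeros(frame.size), np.ones(frame.size)])
    world = frame_to_volume @ pix                        # (4, rows*cols)
    idx = np.round(world[:3] / spacing).astype(int)      # voxel indices

    inside = np.all((idx >= 0) & (idx < np.array(volume.shape)[:, None]), axis=0)
    volume[idx[0, inside], idx[1, inside], idx[2, inside]] = frame.ravel()[inside]
    return volume

# One periodic sweep: repeatedly call insert_frame with the cradle-reported
# pose of each frame to build a 3D model data block without manual scanning.
volume = np.zeros((128, 128, 128), dtype=np.float32)
```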
In current needling systems the needle is inserted adjacent to the probe to allow it to be visualised easily. The needle is guided along or into the scan plane of the ultrasound transducer 102 by visually aligning its shaft with the ultrasound transducer 102 or by using a mechanical guide.
In accordance with either of the first or second embodiments, when using a rendered image containing either a plane not aligned with the ultrasound beam, or a volumetric rendering of the 3D data block, the needle 110 may be visualised even if it is inserted away from the transducer and its trajectory no longer needs to be aligned with the scan plane of the ultrasound transducer 102, since data out-of-plane of the scan plane is made visible.
Therefore, the ultrasound transducer may be placed in a position to best optimise the information that can be inferred from the rendered image whilst the needle may be placed in a position to simplify its safe insertion by avoiding intermediate structures.
This enables certain procedures to be simplified.
One example of such a procedure is a needle biopsy of a kidney. A costal position provides a better view of the kidney than would be obtainable from the patient’s back as the muscle
density in a patient's back is too high to enable a good ultrasound view of the kidney to be obtained. It is, however, desirable to insert the needle into the kidney through the back as this significantly reduces the chance of internal bleeding.
Another example of such a procedure is liver needling. A large portion of the liver typically lies under the ribs of a patient and it is not possible to obtain good ultrasound images through the ribcage. By taking the rendered view from the sub-costal position, a rendered view of the liver can be generated which can enable a good image of the liver to be obtained whilst inserting the needle intercostally.
Optionally, the user may provide an input to the needling system 200 from the headset 104 to indicate a needling target on the rendered view. This is illustrated in Figure 5 where the ultrasound transducer 102 is in position over a patient's arm and a nerve 500 which is to be the subject of a peripheral nerve block is part of the rendered image 502 generated in steps S214 or S414.
The input may be a voice command which can be detected by the headset 104 or a gesture input which can also be detected using the motion sensing capability or camera of the headset 104. As an example, using a Microsoft HoloLens, the gaze cursor may be used to provide the location of the object to be marked. Alternatively, voice or gesture input may be used. The resultant location information is fed into the control module 106 and subsequently to the indicia adder sub-module 106f.
Responsive to receiving the input, the headset 104 transmits a request to the control module 106 via the headset interface 130, and the control module 106 is then configured to feed a request to the indicia adder sub-module 106f for a marker to be added to the rendered view.
The indicia adder sub-module 106f then adds the marker 504 to the 3D model data block at the indicated location as part of steps S212 or S412 and transmits the 3D model data block to the image compositor sub-module 106e in the step S214 or S414 where the rendered view is generated with the addition of the marker 504 at the indicated location.
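By way of illustration only, the sketch below shows how an indicated world-space target could be converted into voxel indices of the 3D model data block so that the indicia adder sub-module can write a marker there. The `world_to_block` transform, the spherical marker shape and the parameter values are assumptions, not the disclosed implementation.

```python
import numpy as np

def add_marker(block, world_point, world_to_block, marker_value=1.0, radius=2):
    """Write a small spherical marker into the 3D model data block at the
    voxel nearest to a world-space point indicated by the user.

    `world_to_block` is an assumed 4x4 transform from headset world
    coordinates into voxel indices of the block.
    """
    p = world_to_block @ np.append(np.asarray(world_point, float), 1.0)
    centre = np.round(p[:3]).astype(int)

    # Clamp a small cube around the centre to the block bounds, then carve
    # out a sphere of voxels and set them to the marker value.
    lo = np.maximum(centre - radius, 0)
    hi = np.minimum(centre + radius + 1, block.shape)
    zz, yy, xx = np.mgrid[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
    mask = ((zz - centre[0]) ** 2 + (yy - centre[1]) ** 2
            + (xx - centre[2]) ** 2) <= radius ** 2
    block[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]][mask] = marker_value
    return block
```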
The control module 106 maintains the presence of the marker at a static position in world space on the generated rendered view until further user input indicates that the marker should be removed.
The user may then issue further user input indicative of a desire to insert a line onto a rendered view to identify a suitable position and direction for a needle to be inserted. The user input may use the rich range of user input features provided by the headset 104, for example, the gesture or voice recognition input capabilities of the headset 104.
Responsive to receiving the input, the headset 104 transmits a request to the control module 106 via the headset interface 130, and the control module 106 is then configured to feed a request to the indicia adder sub-module 106f for a line 506 to be added to the rendered view. The indicia adder sub-module 106f then calls a line addition routine which receives the data regarding the marker as an input.
In generating the line 506, the image segmentation sub-module 106d may issue a call to the tracking module 107 to obtain data indicative of the orientation of the ultrasound transducer 102. The image segmentation sub-module 106d may use the information concerning the orientation of the ultrasound transducer 102 as an input to the image segmentation routine which will then generate the line 506 so that it is at a configurable angle to the scan plane of the ultrasound transducer 102 to maximise the ultrasound reflectance and hence enhance the visibility of the needle 110.
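A minimal sketch of the underlying geometry is given below: a guide-line direction is constructed inside the scan plane and tilted out of it by a configurable angle. The vector names, the way the in-plane direction is chosen and the default length are illustrative assumptions.

```python
import numpy as np

def insertion_line(target, plane_normal, in_plane_dir, angle_deg, length=0.08):
    """Return the two endpoints of a guide line through the marked target.

    The line lies at `angle_deg` to the scan plane: it is built from a
    unit direction inside the plane (the projection of `in_plane_dir`,
    assumed not parallel to the normal) tilted out of the plane along
    the plane normal.  All vectors are in world coordinates.
    """
    n = np.asarray(plane_normal, float)
    n /= np.linalg.norm(n)
    d = np.asarray(in_plane_dir, float)
    d = d - np.dot(d, n) * n              # project into the scan plane
    d /= np.linalg.norm(d)

    a = np.deg2rad(angle_deg)
    direction = np.cos(a) * d + np.sin(a) * n   # at angle_deg to the plane
    target = np.asarray(target, float)
    return (target - 0.5 * length * direction,
            target + 0.5 * length * direction)
```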
The control module 106 maintains the presence of the marker 504 and the line 506 on the generated rendered view until further user input indicates that the marker 504 and the line 506 should be removed.
Optionally, the headset 104 may enable the user of the needling system 200 to interact with the system to modify the scene.
Optionally, a user may wish to remove a rendered view from the scene. In order to do this, the user may issue a voice command “remove image” which is detected by the microphone of the headset 104 and, responsive to receiving this command, the headset 104 will switch off the display of the rendered view generated in step S212 or S412. Responsive to the voice command “reinstate image”, which is detected by the microphone of the headset 104, the headset will reinstate the display of the rendered view which has been contemporaneously generated in steps S200 to S216 or steps S400 to S416.
Alternatively or additionally, a user may provide input to the headset 104 to indicate that some of the data should be de-emphasised from a rendered view generated in steps S214 and S414 to generate a cut-away view.
Optionally, a user may provide input to the visualisation system 100 to indicate that they would like the position of an object, such as the needle, to be highlighted. This optional feature is discussed in the context of tracking a needle, but this is only intended to be illustrative.
The tracking of a feature is described with reference to Figure 10.
Responsive to receiving the input, the control module 106 issues a request to the image segmentation sub-module 106d for the position of the needle to be tracked in a step S1000.
In a step S1002, the image segmentation sub-module 106d retrieves previously time-stamped 3D model data blocks.
In a step S1004, the image segmentation sub-module 106d calls a local thresholding routine which is applied to each of the retrieved 3D model data blocks. The local thresholding routine segments each of the 3D model data blocks using a local thresholding algorithm such as the one described in Bernsen, J.: 'Dynamic thresholding of gray-level images', Proc. 8th Int. Conf. on Pattern Recognition, Paris, 1986, pp. 1251-1255 or J. Sauvola and M. Pietikainen, "Adaptive document image binarization," Pattern Recognition 33(2), pp. 225-236, 2000.
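By way of illustration, a Bernsen-style local threshold (the midpoint of the local minimum and maximum within a window) can be applied to a 3D model data block as sketched below. The use of SciPy filters and the window and contrast parameters are assumptions rather than the routine actually employed.

```python
import numpy as np
from scipy.ndimage import minimum_filter, maximum_filter

def bernsen_threshold(block, window=15, contrast_min=0.1):
    """Local (Bernsen-style) thresholding of a 3D model data block.

    Each voxel is compared against the midpoint of the local minimum and
    maximum inside a cubic window; low-contrast neighbourhoods are left
    as background.  Parameter values are illustrative only.
    """
    lo = minimum_filter(block, size=window)
    hi = maximum_filter(block, size=window)
    midpoint = 0.5 * (lo + hi)
    contrast = hi - lo

    segmented = (block > midpoint) & (contrast >= contrast_min)
    return segmented.astype(np.uint8)
```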
In a step S1006, the image segmentation sub-module 106d calls a line extraction algorithm such as a random sample consensus (RANSAC) algorithm which outputs data for each of the 3D model data blocks. The data output for each of the 3D model data blocks indicates the possible lines that exist in the 3D model data block - which would include the lines which represent the track of a needle which is inside the body which was used to generate the 3D model data blocks. Another method which may be used for step S1008 is disclosed in https://hal.archives-ouvertes.fr/hal-00810785/document.
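The sketch below shows a RANSAC line fit over the segmented voxel coordinates of one 3D model data block. For brevity it returns only the single best-supported line, whereas the step described above retains all candidate lines; the iteration count and inlier tolerance are illustrative.

```python
import numpy as np

def ransac_line(points, n_iter=500, inlier_tol=2.0, rng=None):
    """Fit a dominant line to segmented voxel coordinates with RANSAC.

    `points` is an (N, 3) array of voxel coordinates (e.g. the result of
    np.argwhere on the segmented block).  Returns (point_on_line,
    unit_direction, inlier_mask) for the candidate with most inliers.
    """
    rng = np.random.default_rng(rng)
    best = (None, None, np.zeros(len(points), dtype=bool))
    for _ in range(n_iter):
        i, j = rng.choice(len(points), size=2, replace=False)
        p, q = points[i].astype(float), points[j].astype(float)
        d = q - p
        norm = np.linalg.norm(d)
        if norm < 1e-9:
            continue
        d /= norm
        # Distance of every point to the candidate line through p along d.
        diff = points - p
        dist = np.linalg.norm(diff - np.outer(diff @ d, d), axis=1)
        inliers = dist < inlier_tol
        if inliers.sum() > best[2].sum():
            best = (p, d, inliers)
    return best
```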
The data output from the line extraction algorithm for each of the 3D model data blocks can then be treated as a time-series of timestamped line data which can then be analysed using time-series based techniques.
In step S1008, the image segmentation sub-module 106d calls an auto-correlation routine which determines the auto-correlation for the timestamped line data. The determination of the auto-correlation of the timestamped line data provides the similarities between the identified lines in the successively timestamped 3D model data blocks.
In a step S1010, this data can be used by the image segmentation sub-module 106d to identify the position of the needle as it is statistically unlikely that the needle will move laterally inside the body. That is to say, line data which is indicative of lateral movement can be rejected which will leave only line data indicative of longitudinal movement.
The line data corresponding to the likely position of the needle is then output by the image segmentation sub-module 106d in a step S1012. The line data corresponding to the likely position of the needle can then be used, with reference to the respective 3D model data block, to determine the position of the track, the orientation of the track and the length of the track of the needle, as well as the needle's current position and orientation. The data relating to the track of the needle can then be used to predict the track the needle will likely take.
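A minimal sketch of this longitudinal-versus-lateral filtering is given below, assuming each timestamped line fit is a (point, unit direction) pair. The simple pairwise comparison and the tolerances stand in for the auto-correlation analysis and are illustrative only.

```python
import numpy as np

def consistent_needle_track(line_series, dir_tol=0.95, lateral_tol=2.0):
    """Filter a time series of (point, direction) line fits.

    A transition is kept when the fitted direction barely changes and
    the displacement of the fitted point is predominantly along the
    shaft (longitudinal); laterally moving line candidates are rejected.
    Tolerances are illustrative.
    """
    kept = [line_series[0]]
    for (p0, d0), (p1, d1) in zip(line_series, line_series[1:]):
        # Directions are unit vectors; |dot| close to 1 means similar lines.
        if abs(np.dot(d0, d1)) < dir_tol:
            continue
        step = np.asarray(p1, float) - np.asarray(p0, float)
        lateral = step - np.dot(step, d0) * np.asarray(d0, float)
        if np.linalg.norm(lateral) > lateral_tol:
            continue
        kept.append((p1, d1))
    return kept
```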
The image segmentation sub-module 106d may also determine the velocity of the needle by applying a routine to determine the rate of change of the position of the track.
The effect of using the image segmentation sub-module 106d to determine the position, orientation and length of the track is that external tracking of the object of interest, e.g., by using a magnetic or optical tracking system, may not be necessary as the execution of steps S1000 to S1012 provides an output which is indicative of the position of the object of interest.
The data indicating some or all of the position, velocity, orientation and length of the track is then passed to the image compositor sub-module 106e in steps S214 and S414 where the data is combined with the data from the respective sub-modules to generate the rendered view.
The steps S1000 to S1012 are then iterated for every 3D model data block received by the image segmentation sub-module 106d until the user indicates through user input that they do not wish for the tracking to take place any more.
The rendered view generated by the execution of steps S214 and S414 can then be generated to contain an image enhancement to identify the needle. Further image enhancement can also be used to display the track of the needle.
Steps S1006 to S1010 can be combined using the ROI-based RANSAC and Kalman method disclosed in "Automatic needle detection and tracking in 3D ultrasound using an ROI-based RANSAC and Kalman method" (Ultrason Imaging. 2013 Oct;35(4):283-306. doi: 10.1177/0161734613502004).
Optionally, the control module 106 and the headset 104 may be configured to enable features of the scan data to be extracted and emphasised on the rendered view which is displayed in steps S216 and S416.
A user wearing a headset 104 may indicate, using a voice command or a gesture, that a region or feature of the generated rendered view is of particular interest. The example discussed here is a needle, but this can be applied to any feature, or multiple features, of interest within an ultrasound scan.
If a user wanted to identify a needle on the generated rendered view, they may input a voice command “identify needle” during a needling procedure being carried out using the needling system 200. This voice command may be detected by the headset 104. Alternatively the user may gesture in a specified manner which is detected by the motion sensing capability of the headset 104.
Responsive to receiving such a command, the headset 104 issues a request to the control module 106 via the headset interface 130 for the identification of the needle on relevant rendered views. The control module 106 feeds a request for the identification of the needle to the image reslicer sub-module 106b.
As part of the step S212 (or step S412), the image reslicer sub-module 106b calls a needle identification routine. The needle identification routine applies an image reslicing routine, for example a nearest neighbour reslicing routine such as the one described in https://camtools.cam.ac.uk/access/content/group/d4fe6800-4ce2-4bad-8041-957510e5aaed/Public/3G4/3G4 lab handout 13.pdf, to extract the plane of the 3D model data block which is aligned with the longitudinal axis, i.e. the length, of the needle.
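By way of illustration, an oblique plane such as the one containing the needle axis can be resampled from the 3D model data block by nearest-neighbour look-up, as sketched below. The plane parameterisation (an origin on the needle axis and two orthonormal in-plane directions) and the helper name are assumptions.

```python
import numpy as np

def reslice_plane(block, origin, u_dir, v_dir, size=(128, 128), step=1.0):
    """Nearest-neighbour reslice of a 3D data block along an arbitrary plane.

    `origin` is a voxel-space point on the plane (e.g. a point on the
    needle axis) and `u_dir`/`v_dir` are orthonormal in-plane directions,
    one of them along the needle's longitudinal axis.  Samples falling
    outside the block are returned as 0.
    """
    u = np.asarray(u_dir, float) / np.linalg.norm(u_dir)
    v = np.asarray(v_dir, float) / np.linalg.norm(v_dir)

    rows, cols = size
    rr, cc = np.meshgrid(np.arange(rows) - rows / 2,
                         np.arange(cols) - cols / 2, indexing="ij")
    coords = (np.asarray(origin, float)[None, None, :]
              + step * rr[..., None] * u + step * cc[..., None] * v)
    idx = np.round(coords).astype(int)                    # nearest voxel

    out = np.zeros(size, dtype=block.dtype)
    valid = np.all((idx >= 0) & (idx < np.array(block.shape)), axis=-1)
    iz, iy, ix = idx[valid].T
    out[valid] = block[iz, iy, ix]
    return out
```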
The image reslicer sub-module 106b then outputs the data in step S212 (or step S412), including the plane of the 3D model data block containing the longitudinal axis of the needle 110.
The rendered view formed by the image compositor sub-module 106e in steps S214 (or step S414) is then generated with the plane of the 3D model data block containing the needle visible on the rendered view.
The information relating to the plane containing the needle may be fed to the probe cradle 120 and used by the motors 1105 to align the ultrasound transducer 102 in the correct orientation.
Alternatively or in addition to the plane containing the needle, a user may provide input indicating that they would like to see one or more planes that are at configurable angles to the plane coincident with the longitudinal axis of the needle. These planes may include a plane which is orthogonal to the plane containing the longitudinal axis of the needle. Input may be provided to the image reslicing routine to generate the indicated other planes.
A user may decide they would like to enhance the rendered view using false colour or another type of image enhancement. They may indicate this to the headset 104 through voice input or other command. The headset 104 then transmits a request to the control module 106 via the headset interface 130.
The control module 106 then issues a request to the image segmentation sub-module 106d, which calls an image enhancement routine which is configured to apply false colour or another form of image enhancement to a portion of the 3D model data block in the step S212 (or step S412).
The image enhancement routine may apply thresholding, alpha-blending, clustering or compression-based methods to the received volumetric data block to generate a segmented volumetric data block which is then transmitted to the volume rendering sub-module 106c. The image enhancement routine may also apply the image segmentation method described in https://www.ncbi.nlm.nih.gov/pubmed/11548934.
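A minimal sketch of a false-colour enhancement of a segmented region is given below, using simple alpha-blending of a tint into the greyscale volume. The colour, alpha value and function name are illustrative and do not represent the disclosed enhancement routine.

```python
import numpy as np

def false_colour_overlay(block, mask, colour=(1.0, 0.2, 0.2), alpha=0.6):
    """Blend a false colour into the segmented voxels of a data block.

    `block` is a greyscale volume in [0, 1] and `mask` a boolean volume
    from the segmentation step; the result is an RGB volume in which the
    segmented region is alpha-blended towards the chosen colour.
    """
    rgb = np.repeat(block[..., None], 3, axis=-1)          # grey -> RGB
    tint = np.asarray(colour, float)
    rgb[mask] = (1.0 - alpha) * rgb[mask] + alpha * tint
    return rgb
```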
The rendered view can then be generated as in steps S212 or S412 with the added image enhancement.
An example of a rendered view which includes the identification and enhancement of a needle is illustrated in Figure 6. In this example, the user is using the visualisation system 100 to target vessel 606 and can see vessel 606 through the field of view of the headset 104
as the vessel appears on the rendered view displayed in steps S216 and S416. The user has used the ultrasound transducer 102 (with scan plane 620) with the visualisation system 100 to identify the plane containing the needle 110 and it is being highlighted with an outline generated by the image segmentation sub-module 106d in step S212 (or S412). That is to say, the shaft of the needle 650 is highlighted in outline so it is easier to see as it is being used as part of a needling procedure on vessel 606.
Another example of a rendered view is illustrated in Figure 7. The plane containing the needle is enumerated by 702. The plane which is transverse to the plane containing the needle is enumerated by 704. The region of interest is identified by 706.
Another example of such a rendered view is provided in Figure 8 where the emphasised region of interest is enumerated as 802 in the rendered view 800.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be capable of designing many alternative embodiments without departing from the scope of the invention as defined by the appended claims. In the claims, any reference signs placed in parentheses shall not be construed as limiting the claims. The words “comprising” and “comprises”, and the like, do not exclude the presence of elements or steps other than those listed in any claim or the specification as a whole. In the present specification, “comprises” means “includes or consists of” and “comprising” means “including or consisting of”. The singular reference of an element does not exclude the plural reference of such elements and vice-versa. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a device claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
Claims (37)
1. Visualisation system configured to:
receive scan data from a scanning portion representative of an interior portion of a body;
receive a first set of positional and orientational data indicative of the positions of at least one user;
receive a second set of positional and orientational data indicative of the position of a scanning portion;
generate one or more rendered views of the interior portion using the scan data and the first and second sets of positional and orientational data;
receive input from the at least one user indicative of a change in their viewpoint or a desire to modify one or more features of the rendered views;
modify the rendered views responsive to the user input; and combine the rendered views into a scene for display by the system.
2. System according to Claim 1, wherein the system further comprises a support portion arranged to support the scanning portion.
3. System according to Claim 2, wherein the support portion is arranged to move the scanning portion.
4. System according to Claim 3, wherein the support portion is arranged to receive positional data identifying a specified scan plane; and the support portion is arranged to move the scanning portion into the specified scan plane for the generation of scan data.
5. System according to Claim 4 wherein the support portion is arranged to periodically move the scanning portion to repeatedly scan over a volume of interest.
6. System according to any of Claims 2 to 5, wherein the support portion is coupled to a support arm arranged to hold the support portion in a desired position.
7. System according to any of Claims 2 to 6 wherein the support portion is arranged to generate the positional and orientational data indicative of the position and orientation of the scanning portion.
8. System according to any preceding claim, wherein the system further comprises a needle.
9. System according to any preceding claim, wherein the scan data is three-dimensional ultrasound data.
10. System according to any of Claims 1 to 8 wherein the scan data comprises a plurality of two-dimensional frames from a scanning portion and positional data indicating the position and orientation of said scanning portion.
11. System according to any preceding claim, wherein the input received from the user identifies at least one target location in the interior portion of the body and the system is further configured to generate and display indicia to identify the at least one target location as part of the generation of a rendered view.
12. System according to Claim 11 wherein the indicia comprises a text portion displaying information relating to the target location.
13. System according to Claim 11 wherein the indicia comprises a geometric shape.
14. System according to any preceding claim wherein the system is configured to display additional rendered views displaying data related to the body, which are combined into the scene.
15. System according to any preceding claim wherein the system is configured to display rendered views comprising one or more statically located views of configurable size.
16. System according to any previous claim wherein the system is configured to display a rendered view, wherein the rendered view is generated based on the position and orientation of another user.
17. System according to any previous claim wherein the scene comprises a rendered view comprising a volumetric representation of the body.
18. System according to any preceding claim wherein the scene is displayed on at least one headset.
19. System according to Claim 18 wherein the at least one headset comprises a plurality of headsets.
20. System according to Claim 18 or Claim 19 wherein the system is further configured to:
receive positional data from one or more headsets;
generate a rendered view for each of the one or more headsets using the scan data and the positional data from the respective headset.
21. System according to Claim 20, wherein the system is further configured to:
receive input from one or more headsets, the user input indicating a modification to a rendered view; and modify the rendered view for the respective headset responsive to the input from the respective headset.
22. System according to any of Claims 18 to 21 wherein the at least one headset is an augmented reality headset.
23. System according to any preceding claim wherein the input from the user identifies a target for insertion of a needle.
24. System according to Claim 23, wherein the system is configured to, responsive to receiving the input identifying a target for insertion of a needle, overlay a marker onto the target.
25. System according to Claim 24 wherein the system is further configured to generate a path identifier from the exterior of the body to the target for insertion of the needle.
26. System according to Claim 25 wherein the path identifier is generated to be at a configurable angle to a scan plane of a scanning portion.
27. System according to any preceding claim wherein the input from the at least one user comprises a request to track an item and the system is configured to analyse the scan data to extract the data indicative of the track of the item of interest.
28. System according to Claim 27, wherein the system is configured to:
receive the request to track the item;
retrieve a plurality of chronologically sequenced three-dimensional data blocks, each data block corresponding to an instance in time at which the three-dimensional data block was generated;
extract a plurality of tracks corresponding to possible tracks of the item; and
determine the correlation of the plurality of chronologically sequenced possible tracks to extract the most likely track of the item from the plurality of possible tracks.
29. System according to Claim 28 wherein the system is configured to highlight the position of the item in one or more rendered views.
30. System according to Claim 28, wherein the system is configured to highlight the most likely track of the item in one or more rendered views.
31. System according to any preceding claim wherein the scene comprises a rendered view displaying a three-dimensional representation of the interior portion of the body.
32. System according to Claim 31 wherein the scene comprises a rendered view comprising a de-emphasised portion of at least part of the scan data to generate a cutaway view.
33. System according to Claim 32 wherein the scene comprises a rendered view comprising the emphasis of at least part of the scan data.
34. System according to any preceding claim wherein the scene comprises a rendered view comprising a plurality of planes.
35. System according to Claim 34 wherein the plurality of planes is selected by a user.
36. System according to Claim 34 wherein the plurality of planes comprises a plane coincident with an object of interest.
37. System according to Claim 34 wherein the plurality of planes comprises a plane at a configurable angle to a plane coincident with an object of interest.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB1707869.2A GB2562502A (en) | 2017-05-16 | 2017-05-16 | Visualisation system for needling |
PCT/GB2018/050731 WO2018211235A1 (en) | 2017-05-16 | 2018-03-21 | Visualisation system for needling |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB1707869.2A GB2562502A (en) | 2017-05-16 | 2017-05-16 | Visualisation system for needling |
Publications (2)
Publication Number | Publication Date |
---|---|
GB201707869D0 GB201707869D0 (en) | 2017-06-28 |
GB2562502A true GB2562502A (en) | 2018-11-21 |
Family
ID=59201629
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GB1707869.2A Withdrawn GB2562502A (en) | 2017-05-16 | 2017-05-16 | Visualisation system for needling |
Country Status (2)
Country | Link |
---|---|
GB (1) | GB2562502A (en) |
WO (1) | WO2018211235A1 (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20200230391A1 (en) * | 2019-01-18 | 2020-07-23 | Becton, Dickinson And Company | Intravenous therapy system for blood vessel detection and vascular access device placement |
US20200352655A1 (en) * | 2019-05-06 | 2020-11-12 | ARUS Inc. | Methods, devices, and systems for augmented reality guidance of medical devices into soft tissue |
CA3140626A1 (en) * | 2019-05-31 | 2020-12-03 | Tva Medical, Inc. | Systems, methods, and catheters for endovascular treatment of a blood vessel |
DE102020109593B3 (en) * | 2020-04-06 | 2021-09-23 | Universität Zu Lübeck | Ultrasound-Augmented Reality-Peripheral Endovascular Intervention-Navigation Techniques and Associated Ultrasound-Augmented Reality-Peripheral Endovascular Intervention-Navigation Arrangement |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3015070A1 (en) * | 2014-10-31 | 2016-05-04 | Samsung Medison Co., Ltd. | Ultrasound system and method of displaying three-dimensional (3D) image |
US20160225192A1 (en) * | 2015-02-03 | 2016-08-04 | Thales USA, Inc. | Surgeon head-mounted display apparatuses |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7251352B2 (en) * | 2001-08-16 | 2007-07-31 | Siemens Corporate Research, Inc. | Marking 3D locations from ultrasound images |
US7079132B2 (en) * | 2001-08-16 | 2006-07-18 | Siemens Corporate Reseach Inc. | System and method for three-dimensional (3D) reconstruction from ultrasound images |
US20060176242A1 (en) * | 2005-02-08 | 2006-08-10 | Blue Belt Technologies, Inc. | Augmented reality device and method |
US10314559B2 (en) * | 2013-03-14 | 2019-06-11 | Inneroptic Technology, Inc. | Medical device guidance |
US10154239B2 (en) * | 2014-12-30 | 2018-12-11 | Onpoint Medical, Inc. | Image-guided surgery with surface reconstruction and augmented reality visualization |
WO2016133847A1 (en) * | 2015-02-16 | 2016-08-25 | Dimensions And Shapes, Llc | Systems and methods for medical visualization |
EP3265011A1 (en) * | 2015-03-01 | 2018-01-10 | Aris MD, Inc. | Reality-augmented morphological procedure |
CN106063726B (en) * | 2016-05-24 | 2019-05-28 | 中国科学院苏州生物医学工程技术研究所 | Navigation system and its air navigation aid are punctured in real time |
-
2017
- 2017-05-16 GB GB1707869.2A patent/GB2562502A/en not_active Withdrawn
-
2018
- 2018-03-21 WO PCT/GB2018/050731 patent/WO2018211235A1/en active Application Filing
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3015070A1 (en) * | 2014-10-31 | 2016-05-04 | Samsung Medison Co., Ltd. | Ultrasound system and method of displaying three-dimensional (3D) image |
US20160225192A1 (en) * | 2015-02-03 | 2016-08-04 | Thales USA, Inc. | Surgeon head-mounted display apparatuses |
Also Published As
Publication number | Publication date |
---|---|
GB201707869D0 (en) | 2017-06-28 |
WO2018211235A1 (en) | 2018-11-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11759261B2 (en) | Augmented reality pre-registration | |
CA2202052C (en) | Video-based surgical targeting system | |
US6690960B2 (en) | Video-based surgical targeting system | |
CN1689518B (en) | Method for augmented reality instrument placement using an image based navigation system | |
EP0741540B1 (en) | Imaging device and method | |
US20050203380A1 (en) | System and method for augmented reality navigation in a medical intervention procedure | |
GB2562502A (en) | Visualisation system for needling | |
EP2372660A2 (en) | Projection image generation apparatus and method, and computer readable recording medium on which is recorded program for the same | |
JP2018514352A (en) | System and method for fusion image-based guidance with late marker placement | |
US11340708B2 (en) | Gesture control of medical displays | |
EP3789965A1 (en) | Method for controlling a display, computer program and mixed reality display device | |
US20230114385A1 (en) | Mri-based augmented reality assisted real-time surgery simulation and navigation | |
NL2022371B1 (en) | Method and assembly for spatial mapping of a model of a surgical tool onto a spatial location of the surgical tool, as well as a surgical tool | |
CN106068098B (en) | Region visualization for ultrasound guided procedures | |
JP6476125B2 (en) | Image processing apparatus and surgical microscope system | |
US12112437B2 (en) | Positioning medical views in augmented reality | |
US11869216B2 (en) | Registration of an anatomical body part by detecting a finger pose | |
Ni et al. | An ultrasound-guided organ biopsy simulation with 6DOF haptic feedback |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WAP | Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1) |