WO2024089423A1 - System and method for three-dimensional imaging - Google Patents

System and method for three-dimensional imaging

Info

Publication number
WO2024089423A1
WO2024089423A1 (PCT/GB2023/052794)
Authority
WO
WIPO (PCT)
Prior art keywords
images
image
structures
dimensional
probe
Prior art date
Application number
PCT/GB2023/052794
Other languages
French (fr)
Inventor
Jason CARRIER
Original Assignee
Through Leaves Limited
Priority date
Filing date
Publication date
Priority claimed from GB2215790.3A external-priority patent/GB2623771A/en
Priority claimed from GB2215789.5A external-priority patent/GB2623770A/en
Application filed by Through Leaves Limited filed Critical Through Leaves Limited
Publication of WO2024089423A1 publication Critical patent/WO2024089423A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10132 Ultrasound image
    • G06T 2207/10136 3D ultrasound image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation

Definitions

  • Embodiments described herein relate to methods and systems for three-dimensionally imaging a subject.
  • Three-dimensional ultrasound is a medical imaging technique which is often used in foetal, cardiac or intra-vascular applications.
  • Existing three-dimensional ultrasound imaging techniques involve controlling the direction of a sequence of ultrasound images using a beam steering means or a motorised gimbal that is integrated with an ultrasound probe.
  • Such sophisticated methods may be expensive, cover only limited angular ranges, and require a specially trained medical professional to perform them, limiting their accessibility.
  • An aim of the present invention is to provide means for improving three-dimensional imaging techniques.
  • a first aspect provides a method for combining a plurality of three-dimensional images (e.g. ultrasound images) of a subject into a single three dimensional image.
  • Such a method enables a composite image to be constructed from first and second three-dimensional images of a subject that show one or more of the same features, without requiring the relative arrangement of the first and second three dimensional images to be known.
  • a method of combining first and second three-dimensional images of a subject comprising: identifying one or more shared structures of the subject shown in both of the first and second images by, for each of the first and second images, using artificial intelligence to identify structures within each of the respective three-dimensional images; determining a transformation mapping a three- dimensional representation of the one or more shared structures in the first image onto a three- dimensional representation of the one or more shared structures in the second image; and applying the determined transformation to the first image and combining the transformed first image and the second image into a composite image.
  • the artificial intelligence comprises a machine learning model, which may receive each of the first and second images as inputs and identify the structures within each of the first and second images.
  • Using the artificial intelligence comprises providing each of the first and second images as inputs to the machine learning model.
  • the structures identified in the images may be three-dimensional structures identified in each of the images.
  • the structures may be structures identified in two-dimensional slices, where sets of those two-dimensional slices each form a respective one of the three-dimensional images.
  • a storage medium comprising computer instructions executable by one or more processors, the computer instructions when executed by the one or more processors causing the one or more processors to perform a method according to the first aspect.
  • a system for combining first and second three-dimensional images of a subject configured to: identify one or more shared structures of the subject shown in both of the first and second images by, for each of the first and second images, using artificial intelligence to identify structures within each of the respective three-dimensional images; determine a transformation mapping a three-dimensional representation of the one or more shared structures in the first image onto a three-dimensional representation of the one or more shared structures in the second image; and apply the determined transformation to the first image and combine the transformed first image and the second image into a composite image.
  • the system configured to: identify using artificial intelligence a preselected group of structures corresponding to a region of the subject imaged in the first and second images.
  • identifying structures in the first and second images comprises identifying parts of said images showing any of a preselected group of structures.
  • identifying the structures using artificial intelligence comprises using artificial intelligence to identify the structures within each of a set of two-dimensional slices defining the respective three-dimensional image.
  • the artificial intelligence is a machine learning model trained using a training data set comprising three-dimensional images.
  • the artificial intelligence is a three-dimensional segmentation artificial intelligence.
  • the three-dimensional segmentation artificial intelligence outputs a segmented-out copy of at least one of the structures identified in the first and second images, the segmented-out copy of a structure comprising the portion of the first or second image occupied by that structure, with the remainder of the first or second image substantially removed.
  • the system is configured to: identify the one or more shared structures of the subject shown in both the first and second images by identifying structures of the subject within each of the first and second images and determining one or more structures that are present in both the first and second images.
  • the three-dimensional representation of the one or more shared structures in the first image is a first point cloud and the three-dimensional representation of the one or more shared structures in the second image is a second point cloud, and wherein the transformation is determined using the first and second point clouds.
  • the first and second point clouds are derived from segmented- out copies of the one or more shared structures in the first and second images respectively.
  • the first and second point clouds are derived from segmented- out copies of the one or more shared structures in the first and second images respectively that are output by the artificial intelligence.
  • a fourth aspect provides a method for performing ultrasound imaging of a subject to obtain a three-dimensional image in which artefacts, e.g. motion artefacts resulting from the subject moving during the capturing of the images, are compensated for.
  • Such a method may enable a three-dimensional representation of the subject to be captured by arranging the three-dimensional images in their associated orientations and compensating for lateral offsets between one or more of the images. By varying the orientation of the ultrasound probe a larger amount as the series of images are captured, a three-dimensional representation with a larger field of view may be obtained.
  • a method of imaging a subject comprising: capturing a series of ultrasound images using an ultrasound probe; determining a respective orientation of the ultrasound probe when each of the series of ultrasound images is captured, each orientation being derived from measurements of an accelerometer comprised by the ultrasound probe; arranging each of the series of captured ultrasound images in a three-dimensional arrangement based on the orientation of the ultrasound probe when that ultrasound image was captured; identifying a feature of the subject in a plurality of the captured ultrasound images; comparing the position of the identified feature in the plurality of the captured ultrasound images to determine a lateral offset between the plurality of captured ultrasound images; and translating one or more of the plurality of captured ultrasound images to compensate for the lateral offset.
  • the method comprises capturing a series of ultrasound images using an ultrasound probe whilst the ultrasound probe is rotated within a plane substantially perpendicular to the probe’s imaging plane throughout the motion.
  • a storage medium comprising computer instructions executable by one or more processors, the computer instructions when executed by the one or more processors causing the one or more processors to perform a method according to the fourth aspect.
  • a system for imaging a subject comprising: an ultrasound probe configured to capture a series of ultrasound images; an accelerometer, comprised by the ultrasound probe; an output device for providing instructions to a user; and a processor, the processor configured to: use measurements of the accelerometer to derive a respective orientation of the ultrasound probe when each of the series of ultrasound images is captured; arrange each of the series of captured ultrasound images in a three-dimensional arrangement based on the orientation of the ultrasound probe when that ultrasound image was captured; identify a feature of the subject in a plurality of the captured ultrasound images; compare the position of the identified feature in the plurality of the captured ultrasound images to determine a lateral offset between the plurality of captured ultrasound images; and translate one or more of the plurality of captured ultrasound images to compensate for the lateral offset.
  • the system is configured to capture a series of ultrasound images using the ultrasound probe whilst the ultrasound probe is rotated within a plane substantially perpendicular to the probe’s imaging plane throughout the motion.
  • FIG. 1 is a flowchart of an embodiment of a method of combining three dimensional images
  • Fig. 2a is a three-dimensional image of a left lobe of a patient’s liver;
  • Fig. 2b is a three-dimensional image of a right lobe of the patient’s liver shown in Fig. 2a;
  • Fig. 3a shows a three-dimensional representation of a gallbladder extracted from the image of Fig. 2a;
  • Fig. 3b shows a three-dimensional representation of a gallbladder extracted from the image of Fig. 2b;
  • FIG. 4 shows a combination of the images of Figs. 2a and 2b using a transformation based on a transformation generated from the extracted gallbladder representations of Figs. 3a and 3b;
  • Fig. 5 illustrates a method as described herein in which two three-dimensional anatomical images, or “voxels”, are combined to produce a larger contiguous anatomy
  • Fig. 6a shows an ultrasound probe for use in imaging a subject
  • Fig. 6b shows a user holding the ultrasound probe of Fig. 6a against a subject and rotating it through a range of orientations
  • Fig. 6c is a diagram showing a range of motion through which the ultrasound probe of Fig. 6a may be moved in use;
  • Fig. 7a is a flowchart of an embodiment of a method for imaging a subject;
  • Fig. 7b shows additional optional steps of the method of Fig. 7a; and
  • Fig. 8 shows a cross sectional view of a three-dimensional arrangement.
  • Referring to Fig. 1 generally, embodiments of methods and systems are described for combining multiple three-dimensional images of a subject, in which structures within first and second images are identified and used to determine a transformation for mapping one of the images onto the other.
  • Such methods and systems may be used to combine overlapping three dimensional images of a subject, such as three-dimensional ultrasound images of adjacent parts of a patient’s body into a single combined image, without requiring the images to be captured in a known or fixed orientation.
  • the three-dimensional images may each be or comprise a three-dimensional array, a voxel-based three-dimensional image, a point cloud, an array of two-dimensional tomographic images or slices in a fixed arrangement, or any other suitable three-dimensional image format.
  • the image may comprise a series of two-dimensional tomographic images or slices captured as an imaging device is moved and/or rotated relative to a subject.
  • Individual tomographic images or slices may have associated labels or timestamps associated therewith which may indicate their arrangement within the three-dimensional image.
  • the image may be in a file format such as .nifti, .npy, or .ply, which are widely used in three-dimensional medical imaging.
  • the method may comprise transforming one or both of the images into such a format.
  • the three dimensional images may be medical images and the subject may be a patient’s body, such as a human patient’s body, wherein the methods or systems are medical imaging methods or systems.
  • the subject may be a non-human animal body or an inanimate device or object.
  • the two three- dimensional images may be of different, but at least partially overlapping regions of the subject.
  • the three dimensional images may be magnetic resonance imaging (MRI) images, computerized tomography (CT) scan images, ultrasound images, positron emission tomography (PET) images, or any other suitable form of three-dimensional images.
  • one or both of the three-dimensional images may comprise associated metadata, for example, indicating a portion of the subject shown in the image, and/or an orientation of the three-dimensional images relative to the subject.
  • FIG. 1 shows a flowchart 100 of an embodiment of a method of combining two three-dimensional images of a subject.
  • In a first step 110 of the method, one or more structures of the subject shown in both of the two images are identified.
  • the structures may be anatomical structures of an imaged body, such as individual organs, blood vessels and/or components thereof.
  • Identifying the shared structures identifies parts of the subject that are at least partially shown in both images, enabling the different positions and orientations of these structures in the two images to be used to determine the relative positions and orientations of the portions of the subject shown in the two images.
  • Identifying the structures shown in both of the images may comprise identifying a first set of structures shown in the first image and a second set of structures shown in the second image. The first and second sets may then be compared to identify those structures shown in both the first and second images. Identifying structures in each of the first and second images may comprise detecting and labelling the detected structures.
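  • By way of illustration only, this comparison of the first and second sets of identified structures can be expressed as a simple set intersection. The following Python sketch assumes hypothetical dictionaries mapping structure labels to their segmented-out copies; neither the data structure nor the labels are prescribed by the present disclosure.

      # Sketch: determine which identified structures are shared between two images.
      # `structures_in_first` / `structures_in_second` are hypothetical dictionaries
      # mapping a structure label (e.g. "gallbladder") to its segmented-out copy.
      def shared_structure_labels(structures_in_first: dict, structures_in_second: dict) -> set:
          """Return the labels of structures identified in both images."""
          return set(structures_in_first) & set(structures_in_second)

      # Example with placeholder labels:
      first = {"liver": ..., "gallbladder": ..., "aorta": ...}
      second = {"gallbladder": ..., "right kidney": ...}
      print(shared_structure_labels(first, second))  # {'gallbladder'}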
  • identifying structures in the first and second images may comprise identifying parts of said images showing any of a preselected group of structures.
  • the preselected group of structures may correspond to the region of the subject being imaged, and different preselected groups of structures may be used for images of different parts of the subject.
  • the preselected group of structures may comprise the liver, the inferior vena cava (IVC), the aorta, the hepatic veins, the portal veins, and/or the kidney.
  • identifying structures in the first and second images may comprise identifying geometric structures within the images, which may not be part of a preselected group.
  • the structures may be identified within the two three-dimensional images using an artificial intelligence system configured to identify structures of the subject (such as structures of a preselected group as described above) within three-dimensional images, such as a machine learning model trained using a data set comprising a plurality of three-dimensional images as described herein.
  • the structure-identifying artificial intelligence is an artificial intelligence configured and/or trained to perform semantic segmentation of three-dimensional images, such as a U-Net or V-Net convolutional neural network or variant thereof.
  • the artificial intelligence may receive a three-dimensional image as an input and may output the identity and location within the image of each structure identified therein.
  • the structure-identifying artificial intelligence may be configured to output a segmented-out copy of one, some or each of the structures identified in the image.
  • the segmented-out copy of a structure comprises the portion of the input three-dimensional image occupied by that structure, with the remainder of the image substantially removed.
  • Three-dimensional images may be input to the artificial intelligence in a three- dimensional array format such as an .npy file.
  • the method may comprise converting the three-dimensional images into such a three-dimensional array format, such as an .npy file, if they are not already in such a format. Any segmented out copies of identified structures may also be output in such a three-dimensional array format.
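  • As an illustrative sketch only (the disclosure does not prescribe any particular software), a three-dimensional image stored in the .nifti format could be converted into such a three-dimensional array format using the commonly used nibabel and NumPy libraries:

      # Sketch: convert a .nifti three-dimensional image into an .npy array
      # suitable for input to a volumetric segmentation model.
      import nibabel as nib   # widely used reader for NIfTI medical images
      import numpy as np

      def nifti_to_npy(nifti_path: str, npy_path: str) -> np.ndarray:
          """Load a NIfTI volume, save it as .npy and return it as a 3-D array."""
          volume = nib.load(nifti_path).get_fdata()   # 3-D array of voxel intensities
          np.save(npy_path, volume)
          return volume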
  • the structure-identifying artificial intelligence may be a machine learning model trained using a training data set comprising three-dimensional images, such as three-dimensional medical images, and associated identities and locations of a set of structures in each image.
  • the training data may further comprise segmented out copies of one, some, or each of the set of structures in each image.
  • the structure-identifying artificial intelligence is configured and/or trained to identify structures in medical images of a specific part of a patient’s body.
  • a plurality of such structure-identifying artificial intelligences may be configured and/or trained to identify structures in images of different parts of a patient’s body, and the structures in the first and second images may be identified using artificial intelligences corresponding to the parts of a patient’s body shown therein.
  • the structures may be identified in each image of an array of two-dimensional tomographic images or slices defining a three-dimensional image, using an artificial intelligence or machine learning system configured to identify structures of the subject (such as structures of a preselected group as described above) within two-dimensional images or slices, for example, an artificial intelligence configured and/or trained to perform semantic segmentation of two-dimensional images.
  • the structures identified in a three-dimensional image may be all those identified in the two-dimensional images or slices defining said three-dimensional image, and the location of an identified structure within the three-dimensional image may be defined by the location of portions of that structure shown in one or more of the two-dimensional images or slices.
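  • For example (a minimal sketch; the per-slice segmentation outputs are assumed to be binary masks, which the disclosure does not require), the per-slice masks of a structure can be stacked in slice order to give the location of that structure within the three-dimensional image:

      # Sketch: combine per-slice 2-D masks of one structure into a 3-D mask whose
      # non-zero voxels give that structure's location within the volume.
      import numpy as np

      def stack_slice_masks(slice_masks: list) -> np.ndarray:
          """slice_masks: one 2-D boolean mask per tomographic slice, in capture order."""
          return np.stack(slice_masks, axis=0)   # shape: (n_slices, height, width)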
  • structures may be identified within the two images based on boundaries between regions within the three-dimensional images.
  • one or more shared structures that have been identified in both the first and second images are then determined.
  • the one or more shared structures may be all of the structures that have been identified in both of the two images.
  • the one or more shared structures may be a subset of the structures that have been identified in both of the two images.
  • In a second step 120 of the method, a three-dimensional representation of the one or more shared structures in the first image and a three-dimensional representation of the one or more shared structures in the second image are used to determine a transformation for mapping the shared structures in one of the images onto the identified structures in the other of the images.
  • the transformation is for mapping the shared structures in the first image onto the shared structures in the second image.
  • the one or more shared structures may be a plurality of shared structures (or a plurality of shared structures whenever more than one structure has been identified in both of the images), which may improve the precision and/or accuracy with which the mapping is determined.
  • the three-dimensional representation of the shared structures in each image may represent all of the determined shared structures in that image, in the locations that they were identified within that image.
  • the three-dimensional representations may exclude portions of the images other than the shared structures. This may introduce sparsity into the representations, which may facilitate determining the mapping from one image to the other.
  • the three-dimensional representations of the shared structures in the two images may be, or may be derived from, segmented out copies of those structures identified in the image.
  • the segmented-out copies of the structures comprise the portions of the input three-dimensional image occupied by those structures, substantially excluding elements other than the structures.
  • Such segmented out copies of the structures are kept spatially congruent to their positions in the original images.
  • the segmented-out copies of the shared structures may be output by a structure identifying artificial intelligence as described above. Segmenting out the shared structures in this manner may introduce sparsity into the images and the representations of the shared structures therein as described above.
  • the three-dimensional representation of the one or more shared structures within an image may be derived from individual segmented-out copies of each of the one or more shared structures in that image, for example, as output by the structure-identifying artificial intelligence when each structure is identified.
  • a segmented-out copy of all of the one or more shared structures may be derived from each image, and may define or be used to derive the three-dimensional representation of the one or more shared structures.
  • the three dimensional representations of the one or more shared structures in the first and second images may be first and second point clouds respectively.
  • the method may comprise transforming them into a point cloud representation.
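  • As a minimal sketch (assuming the segmented-out copy is available as a NumPy volume in which voxels outside the structure are zero, which the disclosure permits but does not require), a point cloud can be derived by collecting the coordinates of the voxels occupied by the shared structures:

      # Sketch: derive a point cloud (N x 3 array of voxel coordinates) from a
      # segmented-out copy of a structure. Taking coordinates in the image's own
      # grid keeps the point cloud spatially congruent with the original image.
      import numpy as np

      def volume_to_point_cloud(segmented_volume: np.ndarray) -> np.ndarray:
          """Return the (z, y, x) coordinates of all non-zero voxels."""
          return np.argwhere(segmented_volume > 0).astype(np.float32)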
  • Figs 2a and 2b show examples of two overlapping three-dimensional medical images 200, 250 of portions of a patient’s liver.
  • Fig. 2a shows the left lobe of the liver and fig. 2b shows the right lobe of the liver.
  • Both of the two three-dimensional images include the patient’s gallbladder.
  • Figs 3a and 3b each show front and rear views of an extracted segmented-out copy of a gallbladder 300, 350 that has been identified in each of the two images 200, 250 and extracted therefrom.
  • the segmented out gallbladders 300, 350 are three-dimensional representations of the gallbladders identified in the two images.
  • the segmented out gallbladders 300, 350 retain the locations in which they are located in the original images 200, 250, such that a mapping or transformation of one segmented out gallbladder 300, 350 to the other will also accurately map the two images 200, 250 onto each other.
  • the first and second three-dimensional representations of the one or more shared structures may be down-sampled before being used to determine the transformation, preferably in the same manner.
  • Point cloud representations may be downsampled to between 5,000 and 10,000 points, for example, from multi-million-point representations obtained from the images.
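  • A minimal sketch of such down-sampling (uniform random sub-sampling is assumed here purely for illustration; the disclosure does not specify the down-sampling method):

      # Sketch: down-sample a point cloud to a fixed number of points (e.g. 8,000),
      # applying the same procedure to both point clouds before registration.
      import numpy as np

      def downsample(points: np.ndarray, n_points: int = 8000, seed: int = 0) -> np.ndarray:
          """points: (N, 3) array; returns an (n_points, 3) array (or points if N <= n_points)."""
          if len(points) <= n_points:
              return points
          rng = np.random.default_rng(seed)
          idx = rng.choice(len(points), size=n_points, replace=False)
          return points[idx]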
  • the transformation may be determined using an artificial intelligence and/or machine learning model, which may be configured to receive the two three-dimensional representations of the identified structures in the two images, such as point cloud representations, as inputs and to output the transformation for mapping one three-dimensional representation onto the other as an output.
  • the transformation-determining artificial intelligence and/or machine learning model may be a point cloud registration artificial intelligence, such as a PointNet neural network or variant thereof.
  • the transformation for mapping one point cloud onto the other may be a transformation matrix.
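  • For illustration (a sketch assuming a 4 x 4 homogeneous transformation matrix of the kind commonly produced by point-cloud registration networks; the exact form of the matrix is not mandated by the disclosure), such a transformation matrix can be applied to a point cloud as follows:

      # Sketch: apply a 4x4 homogeneous transformation matrix to an (N, 3) point cloud.
      import numpy as np

      def apply_transform(points: np.ndarray, matrix_4x4: np.ndarray) -> np.ndarray:
          """Rotate and translate every point by the homogeneous transform."""
          homogeneous = np.hstack([points, np.ones((len(points), 1))])   # (N, 4)
          transformed = homogeneous @ matrix_4x4.T                       # (N, 4)
          return transformed[:, :3]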
  • the transformation-determining artificial intelligence may be a machine learning model trained using a training data set comprising three-dimensional representations of objects in two different locations, such as point cloud representations, which may comprise representations of anatomical features, for example from performing steps 110, 120 of the method as described above, and/or generic non-anatomical structures, such as artificially created three-dimensional representations of abstract three-dimensional shapes.
  • training the transformation-determining machine learning model may comprise initially training the model using three-dimensional representations, such as point cloud representations, of generic non-anatomical structures and subsequently training the model using three-dimensional representations of anatomical structures. This training approach may quickly train the model to perform transformations before fine-tuning it for use on anatomical structures as in the present invention.
  • In a third step 130 of the method 100, the determined transformation is used to transform one of the two images and the transformed image is combined with the other image to form a composite image.
  • the second image is transformed and is then combined with the first image to form a composite image.
  • Transforming one of the images using the determined transformation effectively results in that image having the same coordinate system as the other image.
  • the two images may then be combined, for example by overlaying them to result in a composite image.
  • the combined image provides a contiguous three-dimensional image covering the portions of the subject shown in both of the two images.
  • the combined image may be subsequently saved or output in any suitable format.
  • the two images may be combined in three dimensional array formats, such as .npy files.
  • Combining the two images may comprise defining an array and adding the two images to the array.
  • the array may be defined with a size sufficient to contain both three-dimensional images in their entirety. At points where the two combined images overlap, the values of one of the two images may be selected, or the values of the two images may be averaged, in some embodiments with a weighted average.
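  • A minimal sketch of this combination step (assuming, for simplicity, that the transformed first image has already been resampled onto the same array grid as the second image, and that zero-valued voxels indicate "no data"; neither assumption is required by the disclosure):

      # Sketch: combine two aligned volumes into one composite array, averaging voxel
      # values where the two images overlap and otherwise taking whichever image has data.
      import numpy as np

      def combine_volumes(first: np.ndarray, second: np.ndarray) -> np.ndarray:
          composite = np.zeros_like(first, dtype=np.float32)
          mask_a, mask_b = first > 0, second > 0
          overlap = mask_a & mask_b
          composite[mask_a & ~overlap] = first[mask_a & ~overlap]
          composite[mask_b & ~overlap] = second[mask_b & ~overlap]
          composite[overlap] = 0.5 * (first[overlap] + second[overlap])  # simple average
          return composite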
  • FIG. 4 shows a composite image produced by combining the three-dimensional images shown in Figs 2a and 2b after the image of the right lobe of the liver shown in Fig. 2b has been transformed using a transformation determined from point cloud representations of the extracted gallbladders shown in Figs. 3a and 3b.
  • the method may comprise further combining the composite image with another image in the same manner as described above. This may be performed by repeating the steps of the method with the composite image being one of the two images for said repeated steps. Such a process may be repeated any number of times to combine a range of images of different parts of the subject.
  • Fig. 5 is a flowchart of a specific embodiment of a method 500 as described herein in which two three-dimensional anatomical images, or “voxels”, are combined to produce a larger contiguous anatomy.
  • the method 500 comprises segmenting out and listing 510 anatomical structures shown in each of the two images, and then comparing 520 the lists of anatomical structures to shortlist those anatomical features present in both images. After this shortlist is created, a point cloud of the shortlisted anatomical structures is created 530 for each of the two images, and a point cloud registration neural network 540 is used to generate a transformation matrix from the point clouds. The transformation matrix is then applied 550 to the second of the three-dimensional images to align it with the first three-dimensional image and the aligned images are then combined. If more than two images are provided, the method may be repeated as needed 560 until all of the images have been combined.
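  • The overall flow of the method 500 could be expressed in outline as follows (a high-level Python sketch only; every helper named here, such as segment_structures, to_point_cloud, register_point_clouds, apply_transform_to_volume and combine_volumes, is a hypothetical placeholder standing in for the corresponding step described above, not an API defined by the disclosure):

      # Sketch of the pipeline of Fig. 5: segment, shortlist, register, combine.
      def combine_images(first_volume, second_volume):
          # 510: segment out and list anatomical structures shown in each image
          structures_a = segment_structures(first_volume)    # {label: segmented-out copy}
          structures_b = segment_structures(second_volume)

          # 520: shortlist the anatomical structures present in both images
          shared = set(structures_a) & set(structures_b)

          # 530: build one point cloud per image from the shortlisted structures
          cloud_a = to_point_cloud([structures_a[s] for s in shared])
          cloud_b = to_point_cloud([structures_b[s] for s in shared])

          # 540: point-cloud registration network yields a transformation matrix
          matrix = register_point_clouds(cloud_b, cloud_a)

          # 550: align the second image with the first and combine the aligned images
          aligned_second = apply_transform_to_volume(second_volume, matrix)
          return combine_volumes(first_volume, aligned_second)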
  • Referring to Fig. 6 generally, embodiments of methods and systems for three-dimensionally imaging a subject are described, in which an ultrasound probe comprising an accelerometer captures both ultrasound images and orientation data, from which a three-dimensional image is constructed.
  • an ultrasound probe comprises at least one accelerometer, configured to detect changes in the orientation of the probe.
  • the accelerometer may be arranged adjacent an end of the probe at which ultrasonic acoustic waves are emitted and detected, for example proximate to, or adjacent, an acoustic lens of the probe.
  • the accelerometer is configured to detect changes in orientations of the probe in three or more dimensions, for example, the accelerometer may be a three-degrees of freedom (3DoF) accelerometer.
  • the accelerometer may be configured to record orientations of the probe, or changes therein, as sets of associated pitch, yaw and roll angles thereof.
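  • For illustration (a sketch assuming SciPy's rotation utilities and an "xyz" Euler-angle convention, neither of which is mandated by the disclosure), a recorded set of pitch, yaw and roll angles can be converted into a rotation matrix describing the probe's orientation:

      # Sketch: convert recorded pitch/yaw/roll angles (in degrees) into a 3x3
      # rotation matrix describing the probe orientation for one captured image.
      from scipy.spatial.transform import Rotation

      def orientation_matrix(pitch: float, yaw: float, roll: float):
          # The "xyz" axis order is an assumption made for this example only.
          return Rotation.from_euler("xyz", [pitch, yaw, roll], degrees=True).as_matrix()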
  • the accelerometer may be configured to detect changes in orientations of the probe in fewer dimensions, for example, in only a single dimension.
  • the accelerometer may be configured to detect changes about only a single axis of the probe, in which case a user may only tilt the probe about such an axis in use.
  • the accelerometer is preferably configured to detect rotation of the probe about at least a short transverse axis of the probe parallel to an imaging plane thereof.
  • Fig. 6a shows an example of an ultrasound probe 600 for use in embodiments described herein.
  • the illustrated probe is a curvilinear phased array ultrasound probe; however, it will be appreciated that any other ultrasound probe may be used, such as a linear ultrasound probe.
  • the probe is configured to be held and manipulated by a user, and/or by other suitable machinery, enabling the orientation of the probe to be varied in use, for example while held in contact with a subject. In use, the probe captures ultrasound images in orientations corresponding to the orientation in which it is held.
  • the illustrated probe 600 comprises an accelerometer 610 arranged adjacent a distal end, or “nose end”, 620 of the probe 600, at which the acoustic lens of the probe 600 is located.
  • Such an ultrasound probe 600 defines or is comprised by a system configured to three-dimensionally image a subject.
  • the system may further comprise a computing device, such as a terminal, which may be configured to control the ultrasound probe and/or to view, output or manipulate images captured by the probe, or composites thereof.
  • the system may be configured to perform an imaging method as described herein, for example by executing computer instructions stored in one or more storage media comprised by or associated with the system using one or more processors comprised by the system, for example, a non- transitory storage media such as a memory comprised by the computing device.
  • the ultrasound probe 600 and/or the computing device of the system is configured to output instructions and/or other information for a user.
  • Such instructions and/or information may comprise visual, audio, haptic, and/or pictographic elements.
  • the instructions may be output directly by the system, for example using an electronic visual display, graphical user interface (GUI), speaker or other output device comprised by, or connected to, the ultrasound probe and/or computing device, or may be output to another computing device in communication with the system, such as a personal computer or smartphone, in order to be output to a user by said other computing device.
  • FIG. 6b shows an example of the probe 600 being used to image a subject 650.
  • a user 640 holds the probe, such that its acoustic lens end 620 is pressed against or into the exterior of the subject 650, such as a portion of their own body. In this position, an imaging plane 630 in which the probe 600 captures ultrasound images extends into the interior of the subject 650.
  • the user holds the acoustic lens end 620 of the probe 600 in the same position on the subject 650 while tilting the probe 600 about an axis 660 parallel to the imaging plane 630 and tangential to the surface of the subject 650, such that tilting the probe 600 sweeps the imaging plane through the subject 650.
  • this axis 660 is a lateral axis of the probe 600, perpendicular to the length of its handle. The reorientation of the probe 600 and the imaging plane 630 as the probe is tilted about the axis 660 are shown with double-headed arrows.
  • Tilting an ultrasound probe 600 about lateral axis 660 of the probe 600 in this manner is sometimes referred to as “fanning” the probe 600.
  • the accelerometer 610 measures the changes in orientation of the probe 600 (and by extension, of the imaging plane 630).
  • Fig. 6c shows an example of the probe 600 being used to image the abdomen of a human patient 650.
  • the probe 600 may be arranged such that its imaging plane 630 is parallel to a longitudinal axis or midline of the body 650, and the probe 600 may be rotated about an axis 660 substantially parallel to the longitudinal axis or midline, such that all of the series of images are substantially parallel to said longitudinal axis or midline, and such that the ultrasound probe is rotated within a transverse plane 670 substantially perpendicular to the probe’s imaging plane throughout the motion.
  • Fig. 7a is a flowchart 700 of a method of imaging a subject 750.
  • the subject 750 is a patient’s body, such as a human patient’s body, and the methods or systems are medical imaging methods or systems.
  • the subject may be a non-human animal body or an inanimate device or object.
  • an ultrasound probe 600 comprising an accelerometer for detecting changes in its orientation is located on a region of interest on the subject 650.
  • the probe 600 is located in a target location on the region of interest, for example near a centre thereof.
  • the method 700 may comprise positioning the ultrasound probe 600 on the region of interest before the measurement steps 710, 715 are performed.
  • instructions on where to position the probe 600 on and/or within the region of interest are output by the system. For example, an image of a probe 600 in position may be presented to the user. Providing instructions on positioning the ultrasound probe 600 may enable a non-medical-professional user to use the ultrasound probe to obtain a three-dimensional ultrasound medical image by following them.
  • the region of interest may be input or selected by a user, for example using a user interface comprised by the probe, or a terminal or computing device in communication therewith.
  • positioning the ultrasound probe 600 comprises detecting that the ultrasound probe 600 is positioned on the region of interest of the subject 650, or on a target location thereof.
  • a user 640 may move the probe across the surface of the subject 650 until the probe 600 is detected to be on the region of interest and/or target location.
  • the probe 600 may be moved across the surface by sliding it over the surface, with its acoustic lens end 620 held against the surface, and/or with a longitudinal axis of the probe 600 extending substantially perpendicularly to the surface.
  • one or more initial ultrasound images are captured using the ultrasound probe 600 and whether the probe is correctly positioned or not may be determined based on the contents of such initial ultrasound images.
  • such initial ultrasound images may be captured continuously and/or at a pre-set frequency.
  • the system is configured to identify the presence, absence, and/or position of one or more features of the subject in each of the initial ultrasound images captured during positioning of the ultrasound probe, for example, using an image recognition artificial intelligence or machine learning model to identify any instances of the one or more features in the initial images.
  • the system may be further configured to use the identity, position, and/or absence of the one or more identified features within an ultrasound image to determine whether the probe 600 was correctly positioned on the region of interest and/or a target location thereon when that ultrasound image was captured.
  • if the system determines that the probe 600 is not correctly positioned when an ultrasound image is captured, or after a pre-set period has elapsed and/or a pre-set number of images have been captured (for example, following initialisation of the system, provision of instructions to the user, or from some other initial time), it may output an indication that the probe is incorrectly positioned and/or instructions to move, or how to move, the probe 600 to the correct position.
  • the actual location of the probe is determined from the one or more features, and the instructions may indicate a direction in which to move the probe 600 from the determined location to the correct position.
  • once the probe 600 is determined to be correctly positioned, a confirmation or other indication may be provided to the user and/or the next stage 710, 715 of the method may be performed.
  • in measurement steps 710, 715 of the method 700, after the ultrasound probe 600 has been positioned on the region of interest, a measurement phase is performed, during which a series of ultrasound images is captured 710 and orientation data is collected 715.
  • the measurement phase may be initiated when the system determines that the probe 600 has been correctly positioned as described above.
  • the measurement phase is initiated by a user, for example using one or more controls on the ultrasound probe or on a terminal or computing device in communication therewith.
  • Each captured 710 ultrasound image may be a two-dimensional planar image or “slice” in a plane corresponding to the orientation of the probe 600 while it was captured.
  • a user 640 manipulates the probe 600 to reorient it relative to the subject 650.
  • the user 640 preferably holds the acoustic lens end 620 of the acoustic probe 600 on the same location on the subject 650.
  • the probe may be rotated in substantially a single direction, for example by fanning the probe as described above with reference to Fig. 6b. In other embodiments, the probe may be rotated in multiple different directions, thereby capturing images in a greater variety of different orientations.
  • the probe 600 may be rotated about an axis parallel to the patient’s midline. For example, as described above with reference to Fig. 6c.
  • for each ultrasound image of the series, a respective orientation in which the probe is arranged while that ultrasound image is being captured is determined from measurements of the accelerometer and may be recorded.
  • the determined orientations may be relative to an absolute coordinate system, relative to each other, and/or relative to some reference orientation, such as an initial orientation when the first image of the series was captured, or when the probe was activated.
  • the respective orientation determined for each ultrasound image defines the plane in which each ultrasound image is located.
  • the captured images and their respective orientations may be stored in a memory.
  • ultrasound images of the series and their associated orientations are captured continuously, for example at a set frequency.
  • an ultrasound image may instead be captured whenever the orientation of the probe measured by the accelerometer 610 differs from any orientation in which a previous ultrasound image of the series was captured, for example, by more than a threshold amount.
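  • A minimal sketch of this capture criterion (the angular-difference measure and the threshold value are illustrative assumptions only):

      # Sketch: decide whether a new probe orientation differs from every previously
      # captured orientation by more than a threshold, and so warrants a new image.
      import numpy as np

      def should_capture(new_angles, captured_angles, threshold_deg: float = 1.0) -> bool:
          """new_angles: (pitch, yaw, roll); captured_angles: list of such tuples."""
          if not captured_angles:
              return True
          diffs = np.abs(np.asarray(captured_angles) - np.asarray(new_angles))
          return bool(np.all(diffs.max(axis=1) >= threshold_deg))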
  • each ultrasound image and/or its associated orientation may be timestamped.
  • a user may tilt or otherwise reorient the ultrasound probe, while holding it in contact with the region of interest.
  • instructions to reorient the probe 600, or a specific motion in which to reorient the probe 600 may be output by the system, for example, after the measurement phase is automatically initiated upon detection that the probe 600 has been correctly positioned as described above.
  • the measurement phase may continue for a fixed period of time, until a fixed number or range of orientations of ultrasound images have been captured, and/or until manually ended by a user, for example using one or more controls on the ultrasound probe 600 or on a terminal or computing device in communication therewith.
  • the series of captured ultrasound images are assembled 720 into a three-dimensional arrangement of images based on the recorded orientations in which they were captured.
  • Assembling the series of captured images into the arrangement may comprise selecting one of the ultrasound images as a reference image for the arrangement.
  • the selected reference image may be the image of a portion of the subject closest to one end or extreme of the subject, which may be determined based on the associated orientation of the probe 600 when it was captured.
  • the selected image is the most cranial (located closest towards the top of the patient’s head) and medial (located closest to the midline of the patient’s body, which extends from the top of the patient’s head towards their feet).
  • the remaining images of the series may be arranged 720 relative to the selected image in the arrangement, based on the difference between their orientations (corresponding to the orientations of the probe 600 when they were captured) and the orientation of the selected reference image.
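  • As an illustrative sketch of this arrangement step (assuming the simple fanning case of Fig. 8, in which each image is rotated about a single axis by a known angle relative to the reference image; the pixel spacing and rotation axis are placeholders):

      # Sketch: compute the 3-D position of each pixel of a 2-D slice that is rotated
      # by `angle_deg` about the x-axis relative to the reference image plane.
      import numpy as np
      from scipy.spatial.transform import Rotation

      def slice_to_3d(slice_shape, angle_deg: float, pixel_spacing_mm: float = 0.5):
          rows, cols = slice_shape
          # Pixel coordinates in the reference (un-rotated) image plane, z = 0.
          ys, xs = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
          coords = np.stack([xs, ys, np.zeros_like(xs)], axis=-1).reshape(-1, 3)
          coords = coords * pixel_spacing_mm
          # Rotate the whole plane by the angular difference between this image's
          # orientation and the orientation of the reference image.
          rotation = Rotation.from_euler("x", angle_deg, degrees=True)
          return rotation.apply(coords).reshape(rows, cols, 3)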
  • Fig. 8 shows a cross section of an example of a three-dimensional image arrangement 800 of ultrasound images constructed after a measurement phase as described above is performed, in which the probe is rotated in a single direction about a single axis.
  • the arrangement comprises a reference image 810 which is arranged horizontally within the arrangement 800.
  • the selected reference image is the most cranial (located closest towards the top of the patient’s head) and medial (located closest to the midline of the patient’s body) image of the series.
  • the remaining images 820 are arranged below the reference image 810 relative thereto based on the differences in their orientations and the orientation of the reference image.
  • the illustrated example arrangement 800 comprises ultrasound images which are all separated by angles within a single plane, as captured by an ultrasound probe 600 being rotated in only a single direction during the measurement steps 710, 715.
  • the ultrasound probe may be rotated in multiple directions around multiple different axes, producing a more complex arrangement of ultrasound images.
  • Arrangements 800 of ultrasound images constructed using the method described herein can cover very large fields of view. In some situations, such arrangements can have fields of view of greater than 180°, for example, where an ultrasound probe is pressed into a relatively soft or yielding portion of the subject.
  • a constructed arrangement 800 of ultrasound images may comprise differently sized gaps between adjacent images.
  • Fig. 7b shows additional steps 730, 740 that may be performed in some embodiments of a method 700 as described above, in order to prevent the image losing quality or information as a result of such excessively large gaps.
  • any gaps between images 810, 820 within the arrangement 800 larger than a threshold separation may be identified 730.
  • Such gaps may define areas of the region of interest of the subject that have been insufficiently imaged during the initial measurement phase and of which additional images are to be captured.
  • the threshold separation may depend upon the region of interest and/or the subject that is being imaged.
  • the gaps between images 810, 820 within the arrangement may be one-dimensional angular separations between adjacent images, which may be compared to a threshold one-dimensional angular separation, or may be two-dimensional solid angles, which may be bounded by images and/or edges of the region of interest, which may be compared to a threshold two-dimensional solid angle.
  • the gaps are one-dimensional separations that can be compared to a one-dimensional threshold angle.
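  • A minimal sketch of this one-dimensional gap check (the angles and threshold value are illustrative assumptions only):

      # Sketch: identify angular gaps between adjacent captured images that exceed a
      # threshold, returning the angle ranges that still need to be imaged.
      import numpy as np

      def find_gaps(image_angles_deg, threshold_deg: float = 5.0):
          """image_angles_deg: angles of the captured slices about the fanning axis."""
          angles = np.sort(np.asarray(image_angles_deg, dtype=float))
          gaps = np.diff(angles)
          return [(float(angles[i]), float(angles[i + 1]))
                  for i, gap in enumerate(gaps) if gap > threshold_deg]

      # Example: images every 4 degrees with a missing band between 20 and 32 degrees.
      print(find_gaps([0, 4, 8, 12, 16, 20, 32, 36, 40]))   # [(20.0, 32.0)]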
  • the illustrated example arrangement 800 includes one gap 830 in which the separation between adjacent images exceeds the threshold. Such a gap results in a lack of information and/or detail on this portion of the imaged region of interest of the subject.
  • instructions to perform further imaging may be output 740 by the system, in a fourth stage of the method 700. Such further imaging may then be performed in an additional measurement phase.
  • the instructions output 740 when a gap 830 exceeding the threshold is identified may instruct a user to initiate and/or perform an additional measurement phase.
  • the instructions may comprise a user interface element enabling a user to initiate an additional measurement phase.
  • identifying one or more gaps 830 exceeding the threshold may automatically initiate an additional measurement phase if and/or when the ultrasound probe 600 is arranged to perform another measurement phase.
  • the instructions output when one or more gaps 830 exceeding the threshold are identified 730 may simply instruct the user to perform additional measurements while moving the ultrasound probe or to repeat their previous movement with the ultrasound probe 600.
  • the instructions may instruct a user to reorient the probe 600 to and/or through one or more specific orientations corresponding to the threshold-exceeding gaps 830, in one or more specific directions, and/or at specific speeds.
  • the instructions may indicate one or more orientations in which to hold the ultrasound probe while performing the further imaging, the one or more orientations in which to hold the ultrasound probe corresponding to orientations of the one or more gaps.
  • the system may output an indication when images have been captured covering each and/or all of the identified gaps 830.
  • the instructions may be to rotate the probe about the same axis as it was rotated about in the initial measurement steps 710, 715.
  • the instructions may specify to rotate the probe in the opposite direction about the axis, and/or to rotate the probe more slowly than in the initial measurement steps 710, 715.
  • the system may be configured to identify one or more features of the subject in a plurality or all of the ultrasound images captured during the measurement phase, for example, using an artificial intelligence or machine learning model.
  • the identified features may be anatomical structures of an imaged patient’s body, such as individual organs and/or blood vessels, and/or parts thereof.
  • identifying features may comprise identifying parts of said images showing any of a preselected group of structures, which may correspond to the region of the subject being imaged.
  • identifying features in the captured images may comprise identifying geometric structures within the images, which may not be part of a preselected group.
  • An artificial intelligence or machine learning model with which the features are identified may be an artificial intelligence configured and/or trained to perform semantic segmentation.
  • the positions of features that have been identified in more than one of the images may be compared between those images to determine a lateral offset between those images.
  • Lateral motion herein refers to movement within the plane of the ultrasound image, such as in a direction towards or away from the ultrasound probe 600, or in a direction across the plane of the ultrasound image.
  • Such lateral offsets may be motion artefacts resulting from the subject moving during the capturing of the images. For example, when a patient whose abdomen is being imaged breathes in and out, this may displace a probe held against their skin inwards and outwards relative to some of their internal organs, and/or may displace one or more imaged internal organs within their body. For example a patient’s liver will move downwards within their body as they breathe in and will move upwards as they breathe out. This can result in organs or parts thereof being located in different positions within the patient’s body in different ultrasound images of the series.
  • the probe is arranged and pivoted such that its imaging plane remains parallel to the midline or longitudinal axis of the patient’s body, for example, as shown in Fig. 6c. In such embodiments, movement of organs upwards, downwards, towards the probe, and/or away from the probe between images of the series will be visible.
  • images that have been laterally offset with respect to each other may be translated within the three-dimensional arrangement to remove such lateral offsets.
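  • For illustration (a sketch assuming that the identified feature is available as a binary mask in each image and that a simple centroid comparison is used; the disclosure does not restrict how the lateral offset is measured), the offset can be estimated and removed as follows:

      # Sketch: estimate the in-plane (lateral) offset of a feature between two slices
      # from its centroid positions, and shift one slice to compensate.
      import numpy as np
      from scipy import ndimage

      def remove_lateral_offset(image_to_shift, feature_mask_ref, feature_mask_other):
          """Shift `image_to_shift` so its feature centroid matches the reference's."""
          centroid_ref = np.array(ndimage.center_of_mass(feature_mask_ref))
          centroid_other = np.array(ndimage.center_of_mass(feature_mask_other))
          offset = centroid_ref - centroid_other          # (row, column) displacement
          return ndimage.shift(image_to_shift, shift=offset, order=1)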
  • the three-dimensional arrangement may be, or may be reformatted into, a three-dimensional image format comprising an array of voxels. Such a format may facilitate isolation and/or analysis of individual parts of the subject, which may be represented by groups or clusters of voxels within the array. Alternatively, the three-dimensional arrangement may be reformatted into a three-dimensional array or a point cloud.
  • a plurality of three-dimensional ultrasound images may be generated by applying the techniques described above with respect to Figs. 6 to 8.
  • the resulting three-dimensional images may comprise a first image and a second image that are combined using the techniques described above with respect to Figs. 1 to 5.
  • Implementations of the subject matter and the operations described in this specification can be realized in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them.
  • Implementations of the subject matter described in this specification can be realized using one or more computer programs, i.e., one or more modules of computer program instructions, encoded on computer storage medium for execution by, or to control the operation of, data processing apparatus.
  • the program instructions can be encoded on an artificially generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal that is generated to encode information for transmission to suitable receiver apparatus for execution by a data processing apparatus.
  • a computer storage medium can be, or be included in, a computer-readable storage device, a computer-readable storage substrate, a random or serial access memory array or device, or a combination of one or more of them. Moreover, while a computer storage medium is not a propagated signal, a computer storage medium can be a source or destination of computer program instructions encoded in an artificially generated propagated signal. The computer storage medium can also be, or be included in, one or more separate physical components or media (e.g., multiple CDs, disks, or other storage devices).
  • a method of imaging a subject comprising: capturing a series of ultrasound images using an ultrasound probe; determining a respective orientation of the ultrasound probe when each of the series of ultrasound images is captured, each orientation being derived from measurements of an accelerometer comprised by the ultrasound probe; and arranging each of the series of captured ultrasound images in a three-dimensional arrangement based on the orientation of the ultrasound probe when that ultrasound image was captured.
  • a method further comprising: determining whether any gaps between the arranged ultrasound images exceed a threshold separation; and when one or more gaps between the arranged ultrasound images exceeding the threshold separation are identified, outputting instructions to perform further imaging.
  • instructions to perform further imaging comprise instructions indicating one or more orientations in which to position the ultrasound probe while performing the further imaging, the one or more orientations in which to position the ultrasound probe corresponding to orientations of the one or more gaps.
  • a method according to any of paragraphs 1.1 to 1.4 further comprising: positioning the ultrasound probe on a region of interest of the subject before capturing the series of ultrasound images; and reorienting the ultrasound probe while holding it on the region of interest while the series of ultrasound images are captured.
  • positioning the ultrasound probe on the region of interest comprises positioning the probe on the subject, capturing an initial ultrasound image using the ultrasound probe, identifying one or more features of the subject in the initial image, and using the one or more identified features to determine whether the probe is correctly positioned on the region of interest.
  • a method according to paragraph 1.6, the method comprising, when the probe is determined not to be correctly positioned, outputting instructions to reposition the ultrasound probe.
  • arranging the series of captured ultrasound images in the three-dimensional arrangement comprises selecting one of the images as a reference image and arranging the remaining images relative to the reference image in the arrangement, based on the differences in the orientations in which they were captured and the orientation in which the reference image was captured.
  • a method further comprising: identifying a feature of the subject in a plurality of the captured ultrasound images, comparing the position of the identified feature in the plurality of the captured ultrasound images to determine a lateral offset between the plurality of captured ultrasound images, and translating one or more of the plurality of captured ultrasound images to compensate for the lateral offset.
  • a system for imaging a subject comprising: an ultrasound probe configured to capture a series of ultrasound images; an accelerometer, comprised by the ultrasound probe; an output device for providing instructions to a user; and a processor, the processor configured to: use measurements of the accelerometer to derive a respective orientation of the ultrasound probe when each of the series of ultrasound images is captured; arrange each of the series of captured ultrasound images in a three-dimensional arrangement based on the orientation of the ultrasound probe when that ultrasound image was captured.
  • the processor further configured to: determine whether any gaps between the arranged ultrasound images exceed a threshold separation; and when one or more gaps between the arranged ultrasound images exceeding the threshold separation are identified, output instructions to perform further imaging using the output device.
  • instructions to perform further imaging comprise instructions indicating one or more orientations in which to position the ultrasound probe while performing the further imaging, the one or more orientations in which to position the ultrasound probe corresponding to orientations of the one or more gaps.
  • arranging the series of captured ultrasound images in the three-dimensional arrangement comprises selecting one of the images as a reference image and arranging the remaining images relative to the reference image in the arrangement, based on the differences in the orientations in which they were captured and the orientation in which the reference image was captured.
  • a system according to any of paragraphs 1.11 to 1.16, wherein the system is further configured to: identify a feature of the subject in a plurality of the captured ultrasound images; compare the position of the identified feature in the plurality of the captured ultrasound images to determine a lateral offset between the plurality of captured ultrasound images, and translate one or more of the plurality of captured ultrasound images to compensate for the lateral offset.
  • a storage medium comprising computer instructions executable by one or more processors, the computer instructions when executed by the one or more processors causing the one or more processors to perform a method according to any of paragraphs 1.1 to 1.10.
  • a method of combining first and second three-dimensional images of a subject comprising: identifying one or more shared structures of the subject shown in both of the first and second images; determining a transformation mapping a three-dimensional representation of the one or more shared structures in the first image onto a three-dimensional representation of the one or more shared structures in the second image; and applying the determined transformation to the first image and combining the transformed first image and the second image into a composite image.
  • identifying the one or more shared structures of the subject shown in both the first and second images comprises identifying structures of the subject within each of the first and second images and determining one or more structures that are present in both the first and second images.
  • identifying structures in the first and second images comprises identifying parts of said images showing any of a preselected group of structures.
  • a system for combining first and second three-dimensional images of a subject configured to: identify one or more shared structures of the subject shown in both of the first and second images; determine a transformation mapping a three-dimensional representation of the one or more shared structures in the first image onto a three-dimensional representation of the one or more shared structures in the second image; and apply the determined transformation to the first image and combine the transformed first image and the second image into a composite image.
  • a system configured to identify the one or more shared structures of the subject shown in both the first and second images by identifying structures of the subject within each of the first and second images and determining one or more structures that are present in both the first and second images.
  • a system configured to identify structures within each of the first and second three-dimensional images using a three-dimensional segmentation artificial intelligence.
  • a system configured to determine the transformation using an artificial intelligence model configured to receive the three-dimensional representation of the one or more shared structures in the first image and the three-dimensional representation of the one or more shared structures in the second image as inputs.
  • a storage medium comprising computer instructions executable by one or more processors, the computer instructions when executed by the one or more processors causing the one or more processors to perform a method according to any of paragraphs 2.1 to 2.10.

Abstract

A method of combining first and second three-dimensional images of a subject. One or more shared structures shown in both of the images are identified by, for each of the first and second images, using artificial intelligence to identify structures within each of the respective three-dimensional images. A transformation mapping a three-dimensional representation of the one or more shared structures in the first image onto a three-dimensional representation of the one or more shared structures in the second image is determined and applied, and the first image and the second image are combined into a composite image. The three-dimensional images may each be obtained by arranging a series of captured ultrasound images in a three-dimensional arrangement based on an orientation of the ultrasound probe determined based on measurements of an accelerometer.

Description

System and Method for Three-dimensional Imaging
FIELD
[0001] Embodiments described herein relate to methods and systems for three-dimensionally imaging a subject.
BACKGROUND
[0002] Many three-dimensional imaging techniques, such as three-dimensional medical ultrasound imaging, are only suitable for obtaining images of limited portions of a subject, such as a patient’s body. In some situations it may be necessary to analyse portions of the subject which extend across multiple such three-dimensional images, which can introduce difficulties in such analysis. One approach to this is to use a computer vision algorithm, such as scale-invariant feature transform (SIFT), to match unique points in three-dimensional images.
[0003] Three dimensional ultrasound is a medical imaging technique which is often used in foetal, cardiac or intra-vascular applications. Existing three-dimensional ultrasound imaging techniques involve controlling the direction of a sequence of ultrasound images using a beam steering means or a motorised gimbal that is integrated with an ultrasound probe. Such sophisticated methods may be expensive, cover only limited angular ranges, and require a specially trained medical professional to perform them, limiting their accessibility.
[0004] An aim of the present invention is to provide means for improving three-dimensional imaging techniques.
SUMMARY OF THE INVENTION
[0005] A first aspect provides a method for combining a plurality of three-dimensional images (e.g. ultrasound images) of a subject into a single three dimensional image. Such a method enables a composite image to be constructed from first and second three-dimensional images of a subject that show one or more of the same features, without requiring the relative arrangement of the first and second three dimensional images to be known.
[0006] According to the first aspect, there is provided a method of combining first and second three-dimensional images of a subject, the method comprising: identifying one or more shared structures of the subject shown in both of the first and second images by, for each of the first and second images, using artificial intelligence to identify structures within each of the respective three-dimensional images; determining a transformation mapping a three-dimensional representation of the one or more shared structures in the first image onto a three-dimensional representation of the one or more shared structures in the second image; and applying the determined transformation to the first image and combining the transformed first image and the second image into a composite image.
[0007] By using artificial intelligence to identify structures (e.g., liver, aorta, the posterior superior ridge), a high level of accuracy in determining the transformation may be achieved whilst requiring less overlap between the first and the second images as compared to prior art techniques that rely on rule-based algorithms to identify unique points within an image.
[0008] The artificial intelligence comprises a machine learning model, which may receive each of the first and second images as inputs and identify the structures within each of the first and second images. Using the artificial intelligence comprises providing each of the first and second images as inputs to the machine learning model. The structures identified in the images may be three-dimensional structures identified in each of the images. The structures may be structures identified in two-dimensional slices, where sets of those two-dimensional slices each form a respective one of the three-dimensional images.
[0009] According to a second aspect, there is provided a storage medium comprising computer instructions executable by one or more processors, the computer instructions when executed by the one or more processors causing the one or more processors to perform a method according to the first aspect.
[0010] According to a third aspect, there is provided a system for combining first and second three-dimensional images of a subject, the system configured to: identify one or more shared structures of the subject shown in both of the first and second images by, for each of the first and second images, using artificial intelligence to identify structures within each of the respective three-dimensional images; determine a transformation mapping a three-dimensional representation of the one or more shared structures in the first image onto a three-dimensional representation of the one or more shared structures in the second image; and apply the determined transformation to the first image and combine the transformed first image and the second image into a composite image.
[0011] In some embodiments, the system is configured to identify, using artificial intelligence, a preselected group of structures corresponding to a region of the subject imaged in the first and second images.
[0012] In some embodiments, identifying structures in the first and second images comprises identifying parts of said images showing any of a preselected group of structures.
[0013] In some embodiments, identifying the structures using artificial intelligence comprises using artificial intelligence to identify the structures within each of a set of two-dimensional slices defining the respective three-dimensional image.
[0014] In some embodiments, the artificial intelligence is a machine learning model trained using a training data set comprising three-dimensional images.
[0015] In some embodiments, the artificial intelligence is a three-dimensional segmentation artificial intelligence.
[0016] In some embodiments, the three-dimensional segmentation artificial intelligence outputs a segmented-out copy of at least one of the structures identified in the first and second image, the segmented-out copy of a structure comprising the portion of the first or second image occupied by that structure, with the remainder of the first or second image substantially removed.
[0017] In some embodiments, the system is configured to: identify the one or more shared structures of the subject shown in both the first and second images by identifying structures of the subject within each of the first and second images and determining one or more structures that are present in both the first and second images.
[0018] In some embodiments, the three-dimensional representation of the one or more shared structures in the first image is a first point cloud and the three-dimensional representation of the one or more shared structures in the second image is a second point cloud, and wherein the transformation is determined using the first and second point clouds.
[0019] In some embodiments, the first and second point clouds are derived from segmented- out copies of the one or more shared structures in the first and second images respectively.
[0020] In some embodiments, the first and second point clouds are derived from segmented- out copies of the one or more shared structures in the first and second images respectively that are output by the artificial intelligence.
[0021] In some embodiments, the system is configured to determine the transformation using a further artificial intelligence model configured to receive the three-dimensional representation of the one or more shared structures in the first image and the three-dimensional representation of the one or more shared structures in the second image as inputs.
[0022] A fourth aspect provides a method for performing ultrasound imaging of a subject to obtain a three-dimensional image in which artefacts, e.g. motion artefacts resulting from the subject moving during the capturing of the images, are compensated for. Such a method may enable a three-dimensional representation of the subject to be captured by arranging the three-dimensional images in their associated orientations and compensating for lateral offsets between one or more of the images. By varying the orientation of the ultrasound probe a larger amount as the series of images are captured, a three-dimensional representation with a larger field of view may be obtained.
[0023] According to the fourth aspect, there is provided a method of imaging a subject, the method comprising: capturing a series of ultrasound images using an ultrasound probe; determining a respective orientation of the ultrasound probe when each of the series of ultrasound images is captured, each orientation being derived from measurements of an accelerometer comprised by the ultrasound probe; arranging each of the series of captured ultrasound images in a three-dimensional arrangement based on the orientation of the ultrasound probe when that ultrasound image was captured; identifying a feature of the subject in a plurality of the captured ultrasound images; comparing the position of the identified feature in the plurality of the captured ultrasound images to determine a lateral offset between the plurality of captured ultrasound images; and translating one or more of the plurality of captured ultrasound images to compensate for the lateral offset.
[0024] In some embodiments, the method comprises capturing a series of ultrasound images using an ultrasound probe whilst the ultrasound probe is rotated within a plane substantially perpendicular to the probe’s imaging plane throughout the motion.
[0025] According to a fifth aspect, there is provided a storage medium comprising computer instructions executable by one or more processors, the computer instructions when executed by the one or more processors causing the one or more processors to perform a method according to the fourth aspect.
[0026] According to a sixth aspect, there is provided a system for imaging a subject, the system comprising: an ultrasound probe configured to capture a series of ultrasound images; an accelerometer, comprised by the ultrasound probe; an output device for providing instructions to a user; and a processor, the processor configured to: use measurements of the accelerometer to derive a respective orientation of the ultrasound probe when each of the series of ultrasound images is captured; arrange each of the series of captured ultrasound images in a three-dimensional arrangement based on the orientation of the ultrasound probe when that ultrasound image was captured; identify a feature of the subject in a plurality of the captured ultrasound images; compare the position of the identified feature in the plurality of the captured ultrasound images to determine a lateral offset between the plurality of captured ultrasound images; and translate one or more of the plurality of captured ultrasound images to compensate for the lateral offset.
[0027] In some embodiments, the system is configured to capture the series of ultrasound images whilst the ultrasound probe is rotated within a plane substantially perpendicular to the probe’s imaging plane throughout the motion.
BRIEF DESCRIPTION OF THE FIGURES
[0028] Fig. 1 is a flowchart of an embodiment of a method of combining three dimensional images;
[0029] Fig. 2a is a three-dimensional image of a left lobe of a patient’s liver;
[0030] Fig. 2b is a three-dimensional image of a right lobe of the patient’s liver shown in Fig. 2a;
[0031] Fig. 3a shows a three-dimensional representation of a gallbladder extracted from the image of Fig. 2a;
[0032] Fig. 3b shows a three-dimensional representation of a gallbladder extracted from the image of Fig. 2b;
[0033] Fig. 4 shows a combination of the images of Figs. 2a and 2b using a transformation generated from the extracted gallbladder representations of Figs. 3a and 3b;
[0034] Fig. 5 illustrates a method as described herein in which two three-dimensional anatomical images, or “voxels”, are combined to produce a larger contiguous anatomy;
[0035] Fig. 6a shows an ultrasound probe for use in imaging a subject;
[0036] Fig. 6b shows a user holding the ultrasound probe of Fig. 6a against a subject and rotating it through a range of orientations;
[0037] Fig. 6c is a diagram showing a range of motion through which the ultrasound probe of Fig. 6a may be moved in use;
[0038] Fig. 7a is a flowchart of an embodiment of a method for imaging a subject;
[0039] Fig. 7b shows additional optional steps of the method of Fig. 7a; and
[0040] Fig. 8 shows a cross sectional view of a three-dimensional arrangement.
DETAILED DESCRIPTION
[0041] Referring to figures 1 to 5 generally, embodiments of methods and systems for combining multiple three-dimensional images of a subject are described, in which structures within first and second images are identified and used to determine a transformation for mapping one of the images onto the other.
[0042] Such methods and systems may be used to combine overlapping three-dimensional images of a subject, such as three-dimensional ultrasound images of adjacent parts of a patient’s body, into a single combined image, without requiring the images to be captured in a known or fixed orientation.
[0043] The three-dimensional images may each be or comprise a three-dimensional array, a voxel-based three-dimensional image, a point cloud, an array of two-dimensional tomographic images or slices in a fixed arrangement, or any other suitable three-dimensional image format. For example, the image may comprise a series of two-dimensional tomographic images or slices captured as an imaging device is moved and/or rotated relative to a subject. Individual tomographic images or slices may have labels or timestamps associated therewith which may indicate their arrangement within the three-dimensional image.
[0044] The image may be in a file format such as .nifti, .npy, or .ply, which are widely used in three-dimensional medical imaging. In some embodiments, the method may comprise transforming one or both of the images into such a format.
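By way of a minimal illustrative sketch only (not part of the disclosure), such a format conversion could be performed as follows, assuming the nibabel and numpy Python packages; the function name and file paths are assumptions introduced purely for illustration.

```python
# Minimal sketch, assuming the nibabel and numpy packages: load a volume stored
# as a NIfTI file and save its voxel data as a .npy array for later processing.
import nibabel as nib
import numpy as np

def nifti_to_npy(nifti_path: str, npy_path: str) -> np.ndarray:
    """Load a .nii/.nii.gz volume and save its voxel intensities as a .npy array."""
    volume = nib.load(nifti_path)           # reads the NIfTI header and data
    data = np.asarray(volume.get_fdata())   # voxel intensities as a 3D float array
    np.save(npy_path, data)
    return data
```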
[0045] The three dimensional images may be medical images and the subject may be a patient’s body, such as a human patient’s body, wherein the methods or systems are medical imaging methods or systems. However, it will be appreciated that in alternative embodiments, the subject may be a non-human animal body or an inanimate device or object. The two three- dimensional images may be of different, but at least partially overlapping regions of the subject.
[0046] The three-dimensional images may be magnetic resonance imaging (MRI) images, computerized tomography (CT) scan images, ultrasound images, positron emission tomography (PET) images, or any other suitable form of three-dimensional images.
[0047] In some embodiments, one or both of the three-dimensional images may comprise associated metadata, for example, indicating a portion of a patient shown in the image, and/or an orientation of the three-dimensional images relative to the subject.
[0048] Fig. 1 shows a flowchart 100 of an embodiment of a method of combining two three-dimensional images of a subject.
[0049] In a first step 110 of the method, one or more structures of the subject shown in both of the two images are identified. The structures may be anatomical structures of an imaged body, such as individual organs, blood vessels and/or components thereof.
[0050] Identifying the shared structures identifies parts of the subject that are at least partially shown in both images, enabling the different positions and orientations of these structures in the two images to be used to determine the relative positions and orientations of the portions of the subject shown in the two images.
[0051] Identifying the structures shown in both of the images may comprise identifying a first set of structures shown in the first image and a second set of structures shown in the second image. The first and second sets may then be compared to identify those structures shown in both the first and second images. Identifying structures in each of the first and second images may comprise detecting and labelling the detected structures.
[0052] In some embodiments, identifying structures in the first and second images may comprise identifying parts of said images showing any of a preselected group of structures. The preselected group of structures may correspond to the region of the subject being imaged, and different preselected groups of structures may be used for images of different parts of the subject. For example, when identifying structures in an image of a patient’s abdomen the preselected group of structures may comprise the liver, the inferior vena cava (IVC), the aorta, the hepatic veins, the portal veins, and/or the kidney.
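Purely by way of illustration, a preselected group of structures could be looked up from the imaged region as sketched below; the dictionary layout and function name are assumptions, and only the abdominal group is taken from the description above.

```python
# Illustrative sketch of selecting a preselected group of structures for a region.
# Only the abdominal group comes from the description; other regions would map to
# their own structure groups defined in the same way.
PRESELECTED_STRUCTURES = {
    "abdomen": ["liver", "inferior vena cava", "aorta",
                "hepatic veins", "portal veins", "kidney"],
}

def structures_for_region(region: str) -> list:
    """Return the group of structures to look for in images of the given region."""
    return PRESELECTED_STRUCTURES.get(region, [])
```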
[0053] Alternatively or additionally, identifying structures in the first and second images may comprise identifying geometric structures within the images, which may not be part of a preselected group.
[0054] The structures may be identified within the two three-dimensional images using an artificial intelligence system configured to identify structures of the subject (such as structures of a preselected group as described above) within three-dimensional images, such as a machine learning model trained using a data set comprising a plurality of three-dimensional images as described herein.
[0055] In some embodiments, the structure-identifying artificial intelligence is an artificial intelligence configured and/or trained to perform semantic segmentation of three-dimensional images, such as a U-Net or V-Net convolutional neural network or variant thereof. The artificial intelligence may receive a three-dimensional image as an input and may output the identity and location within the image of each structure identified therein. In some embodiments, the structure-identifying artificial intelligence may be configured to output a segmented-out copy of one, some or each of the structures identified in the image. The segmented-out copy of a structure comprises the portion of the input three-dimensional image occupied by that structure, with the remainder of the image substantially removed.
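As a hedged illustration of the segmented-out copies described above (and not of the segmentation network itself), the masking step could be implemented as in the sketch below, assuming the network has already produced an integer label map aligned with the input volume; the function name and the use of numpy are assumptions.

```python
# Sketch of deriving a segmented-out copy of one structure from a label map that
# is assumed to be output by a 3D segmentation network (e.g. a U-Net/V-Net variant).
import numpy as np

def segment_out(image: np.ndarray, label_map: np.ndarray, label: int) -> np.ndarray:
    """Return a copy of `image` containing only the voxels assigned `label`,
    with the remainder set to zero so spatial positions are preserved."""
    copy = np.zeros_like(image)
    mask = label_map == label
    copy[mask] = image[mask]
    return copy
```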
[0056] Three-dimensional images may be input to the artificial intelligence in a three-dimensional array format such as an .npy file. In such embodiments, the method may comprise converting the three-dimensional images into such a three-dimensional array format, such as an .npy file, if they are not already in such a format. Any segmented-out copies of identified structures may also be output in such a three-dimensional array format.
[0057] The structure-identifying artificial intelligence may be a machine learning model trained using a training data set comprising three-dimensional images, such as three-dimensional medical images, and associated identities and locations of a set of structures in each image. In some embodiments, the training data may further comprise segmented out copies of one, some, or each of the set of structures in each image.
[0058] In some embodiments, the structure-identifying artificial intelligence is configured and/or trained to identify structures in medical images of a specific part of a patient’s body. A plurality of such structure-identifying artificial intelligences may be configured and/or trained to identify structures in images of different parts of a patient’s body, and the structures in the first and second images may be identified using artificial intelligences corresponding to the parts of a patient’s body shown therein.
[0059] Alternatively, or additionally, the structures may be identified in each image of an array of two-dimensional tomographic images or slices defining a three-dimensional image, using an artificial intelligence or machine learning system configured to identify structures of the subject (such as structures of a preselected group as described above) within two-dimensional images or slices, for example, an artificial intelligence configured and/or trained to perform semantic segmentation of two-dimensional images. In such embodiments, the structures identified in a three-dimensional image may be all those identified in the two-dimensional images or slices defining said three-dimensional image, and the location of an identified structure within the three-dimensional image may be defined by the location of portions of that structure shown in one or more of the two-dimensional images or slices.
[0060] Alternatively, or additionally, structures may be identified within the two images based on boundaries between regions within the three-dimensional images.
[0061] After the structures of the subject shown in each of the two images are identified, one or more shared structures that have been identified in both the first and second images is determined. In some embodiments, the one or more shared structures may be all of the structures that have been identified in both of the two images. In alternative embodiments, the one or more shared structures may be a subset of the structures that have been identified in both of the two images.
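For illustration only, determining the shared structures can be reduced to an intersection of the labels identified in the two images, as in the following sketch; the dictionary inputs and function name are assumptions.

```python
# Sketch of determining shared structures: the structures identified in each image
# are assumed to be held in dictionaries mapping structure names to their
# segmented-out copies, so the shared structures are the common keys.
def shared_structures(structures_first: dict, structures_second: dict) -> list:
    """Return the names of structures identified in both images."""
    return sorted(set(structures_first) & set(structures_second))
```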
[0062] In a second step 120 of the method 100, a three-dimensional representation of the one or more shared structures in the first image and a three-dimensional representation of the one or more shared structures in the second image are used to determine a transformation for mapping the shared structures in one of the images onto the identified structures in the other of the images. In embodiments such as the illustrated example, the transformation is for mapping the shared structures in the first image onto the shared structures in the second image.
[0063] In some embodiments, the one or more shared structures may be a plurality of shared structures (or a plurality of shared structures whenever more than one structure has been identified in both of the images), which may improve the precision and/or accuracy with which the mapping is determined.
[0064] The three-dimensional representation of the shared structures in each image may represent all of the determined shared structures in that image, in the locations that they were identified within that image. The three-dimensional representations may exclude portions of the images other than the shared structures. This may introduce sparsity into the representations, which may facilitate determining the mapping from one image to the other.
[0065] In some embodiments, the three-dimensional representations of the shared structures in the two images may be, or may be derived from, segmented-out copies of those structures identified in the image. The segmented-out copies of the structures comprise the portions of the input three-dimensional image occupied by those structures, substantially excluding elements other than the structures. Such segmented-out copies of the structures are kept spatially congruent to their positions in the original images. The segmented-out copies of the shared structures may be output by a structure-identifying artificial intelligence as described above. Segmenting out the shared structures in this manner may introduce sparsity into the images and the representations of the shared structures therein as described above.
[0066] In some embodiments, the three-dimensional representation of the one or more shared structures within an image may be derived from individual segmented-out copies of each of the one or more shared structures in that image, for example, as output by the structure-identifying artificial intelligence when each structure is identified. In other embodiments, after the one or more shared structures are determined, a segmented-out copy of all of the one or more shared structures may be derived from each image, and may define or be used to derive the three-dimensional representation of the one or more shared structures.
[0067] The three dimensional representations of the one or more shared structures in the first and second images may be first and second point clouds respectively.
[0068] If the segmented-out copies of the one or more shared structures are already in a point cloud format, such point cloud representations may be used in this second step. If they are not already in a point cloud format, for example if they are in a three-dimensional array format such as a .npy file as described above, the method may comprise transforming them into a point cloud representation.
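A minimal sketch of such a conversion, assuming numpy and a segmented-out copy stored as a three-dimensional array, is given below; the function name and the optional voxel-spacing scaling are illustrative assumptions.

```python
# Sketch of converting a segmented-out copy (a 3D array, e.g. loaded from .npy)
# into a point cloud: one point per non-zero voxel, optionally scaled by the
# voxel spacing so the cloud is expressed in physical units.
import numpy as np

def array_to_point_cloud(segmented: np.ndarray,
                         voxel_spacing=(1.0, 1.0, 1.0)) -> np.ndarray:
    """Return an (N, 3) array of coordinates of all non-zero voxels."""
    indices = np.argwhere(segmented > 0)              # (N, 3) voxel indices
    return indices.astype(np.float64) * np.asarray(voxel_spacing)
```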
[0069] Figs 2a and 2b show examples of two overlapping three-dimensional medical images 200, 250 of portions of a patient’s liver. Fig. 2a shows the left lobe of the liver and Fig. 2b shows the right lobe of the liver. Both of the two three-dimensional images include the patient’s gallbladder.
[0070] Figs 3a and 3b show front and rear views of extracted segmented-out copies of a gallbladder 300, 350 that has been identified in each of the two images 200, 250 and extracted therefrom. The segmented-out gallbladders 300, 350 are three-dimensional representations of the gallbladders identified in the two images. The segmented-out gallbladders 300, 350 retain the locations in which they are located in the original images 200, 250, such that a mapping or transformation of one segmented-out gallbladder 300, 350 to the other will also accurately map the two images 200, 250 onto each other.
[0071] The first and second three-dimensional representations of the one or more shared structures may be down-sampled before being used to determine the transformation, preferably in the same manner. Point cloud representations may be downsampled to between 5,000 and 10,000 points, for example, from multi-million-point representations obtained from the images.
[0072] The transformation may be determined using an artificial intelligence and/or machine learning model, which may be configured to receive the two three-dimensional representations of the identified structures in the two images, such as point cloud representations, as inputs and to output the transformation for mapping one three-dimensional representation onto the other as an output. The transformation-determining artificial intelligence and/or machine learning model may be a point cloud registration artificial intelligence, such as a PointNet neural network or variant thereof. The transformation for mapping one point cloud onto the other may be a transformation matrix.
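The downsampling described above might, for example, be a simple random subsample, as in the sketch below; random selection and the target size of 8,000 points are illustrative assumptions within the 5,000–10,000 point range mentioned above, and the function name is not part of the disclosure.

```python
# Sketch of downsampling an (N, 3) point cloud before it is passed to the
# transformation-determining model; random subsampling is used purely for
# illustration, and both clouds would be downsampled in the same manner.
import numpy as np

def downsample(points: np.ndarray, target: int = 8000, seed: int = 0) -> np.ndarray:
    """Randomly subsample a point cloud to at most `target` points."""
    if len(points) <= target:
        return points
    rng = np.random.default_rng(seed)
    keep = rng.choice(len(points), size=target, replace=False)
    return points[keep]
```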
[0073] The transformation-determining artificial intelligence may be a machine learning model trained using a training data set comprising three-dimensional representations of objects in two different locations, such as point cloud representations, which may comprise representations of anatomical features, for example from performing steps 110, 120 of the method as described above, and/or generic non-anatomical structures, such as artificially created three-dimensional representations of abstract three-dimensional shapes. In some embodiments, training the transformation-determining machine learning model may comprise initially training the model using three-dimensional representations, such as point cloud representations, of generic non-anatomical structures and subsequently training the model using three-dimensional representations of anatomical structures. This training approach may quickly train the model to perform transformations before fine-tuning it for use on anatomical structures as in the present invention.
[0074] In a third step 130 of the method 100, the determined transformation is used to transform one of the two images and the transformed image is combined with the other image to form a composite image. In embodiments such as the illustrated example, the second image is transformed and is then combined with the first image to form a composite image.
[0075] Transforming one of the images using the determined transformation effectively results in that image having the same coordinate system as the other image. The two images may then be combined, for example by overlaying them to result in a composite image. The combined image provides a contiguous three-dimensional image covering the portions of the subject shown in the two images. The combined image may be subsequently saved or output in any suitable format.
[0076] The two images may be combined in three-dimensional array formats, such as .npy files. Combining the two images may comprise defining an array and adding the two images to the array. The array may be defined with a size sufficient to contain both three-dimensional images in their entirety. At points where the two combined images overlap, values of one of the two images may be selected, or the values of the two images may be averaged, in some embodiments with a weighted average.
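A hedged sketch of these two steps is given below, assuming numpy and scipy are available, that the transformation is a 4×4 homogeneous matrix expressed in voxel coordinates, and (for brevity) that the composite is built on the second image's grid rather than an enlarged array; the function name is an assumption.

```python
# Sketch of applying the determined transformation to the first volume and fusing
# the two volumes, averaging values where they overlap. Assumes a 4x4 homogeneous
# transformation matrix in voxel coordinates and uses the second image's grid.
import numpy as np
from scipy.ndimage import affine_transform

def combine_volumes(first: np.ndarray, second: np.ndarray,
                    transform: np.ndarray) -> np.ndarray:
    """Resample `first` into `second`'s coordinate system and fuse the volumes."""
    first = np.asarray(first, dtype=float)
    second = np.asarray(second, dtype=float)
    # affine_transform maps each output coordinate back into the input volume,
    # so the inverse of the forward transformation is supplied.
    inverse = np.linalg.inv(transform)
    aligned = affine_transform(first, inverse[:3, :3], offset=inverse[:3, 3],
                               output_shape=second.shape, order=1)
    overlap = (aligned > 0) & (second > 0)
    composite = np.where(aligned > 0, aligned, second)               # either image alone
    composite[overlap] = 0.5 * (aligned[overlap] + second[overlap])  # average overlap
    return composite
```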
[0077] Fig. 4 shows a composite image produced by combining the three-dimensional images shown in Figs 2a and 2b after the image of the right lobe of the liver shown in Fig. 2b has been transformed using a transformation determined from point cloud representations of the extracted gallbladders shown in Figs. 3a and 3b.
[0078] In some embodiments, the method may comprise further combining the composite image with another image in the same manner as described above. This may be performed by repeating the steps of the method with the composite image being one of the two images for said repeated steps. Such a process may be repeated any number of times to combine a range of images of different parts of the subject.
[0079] Fig. 5 is a flowchart of a specific embodiment of a method 500 as described herein in which two three-dimensional anatomical images, or “voxels”, are combined to produce a larger contiguous anatomy. The method 500 comprises segmenting out and listing 510 anatomical structures shown in each of the two images, and then comparing 520 the lists of anatomical structures to shortlist those anatomical features present in both images. After this shortlist is created, a point cloud of the shortlisted anatomical structures is created 530 for each of the two images, and a point cloud registration neural network 540 is used to generate a transformation matrix from the point clouds. The transformation matrix is then applied 550 to the second of the three-dimensional images to align it with the first three-dimensional image and the aligned images are then combined. If more than two images are provided, the method may be repeated as needed 560 until all of the images have been combined.
[0080] Referring to figures 6 to 8 generally, embodiments of methods and systems for three- dimensionally imaging a subject are described, in which an ultrasound probe comprising an accelerometer captures both ultrasound images and orientation data, from which a three- dimensional image is constructed.
[0081] In such embodiments, an ultrasound probe comprises at least one accelerometer, configured to detect changes in the orientation of the probe. The accelerometer may be arranged adjacent an end of the probe at which ultrasonic acoustic waves are emitted and detected, for example proximate to, or adjacent, an acoustic lens of the probe.
[0082] In example embodiments described herein, the accelerometer is configured to detect changes in orientations of the probe in three or more dimensions, for example, the accelerometer may be a three-degrees of freedom (3DoF) accelerometer. The accelerometer may be configured to record orientations of the probe, or changes therein, as sets of associated pitch, yaw and roll angles thereof.
[0083] In alternative embodiments, the accelerometer may be configured to detect changes in orientations of the probe in fewer dimensions, for example, in only a single dimension. For example, in some embodiments the accelerometer may be configured to detect changes about only a single axis of the probe, in which case a user may only tilt the probe about such an axis in use. The accelerometer is preferably configured to detect rotation of the probe about at least a short transverse axis of the probe parallel to an imaging plane thereof.
[0084] Existing methods of 3D imaging a subject frequently require more sophisticated accelerometers with more than three degrees of freedom, such as six-degrees-of-freedom accelerometers that detect changes in their position as well as changes in their orientation. However, embodiments described herein allow changes in position of the accelerometer and associated ultrasound probe to be determined from features in captured ultrasound images and fewer-degree-of-freedom orientation changes, without requiring such accelerometers, which may be more expensive, complex and/or bulky.
[0085] Fig. 6a shows an example of an ultrasound probe 600 for use in embodiments described herein. The illustrated probe is a curvilinear phased array ultrasound probe, however it will be appreciated that any other ultrasound probe may be used, such as a linear ultrasound probe. The probe is configured to be held and manipulated by a user, and/or by other suitable machinery, enabling the orientation of the probe to be varied in use, for example while held in contact with a subject. In use, the probe captures ultrasound images in orientations corresponding to the orientation in which it is held.
[0086] The illustrated probe 600 comprises an accelerometer 610 arranged adjacent a distal end, or “nose end”, 620 of the probe 600, at which the acoustic lens of the probe 600 is located.
[0087] Such an ultrasound probe 600 defines or is comprised by a system configured to three-dimensionally image a subject. The system may further comprise a computing device, such as a terminal, which may be configured to control the ultrasound probe and/or to view, output or manipulate images captured by the probe, or composites thereof. The system may be configured to perform an imaging method as described herein, for example by executing computer instructions stored in one or more storage media comprised by or associated with the system using one or more processors comprised by the system, for example, a non-transitory storage medium such as a memory comprised by the computing device.
[0088] In some embodiments, the ultrasound probe 600 and/or the computing device of the system is configured to output instructions and/or other information for a user. Such instructions and/or information may comprise visual, audio, haptic, and/or pictographic elements. The instructions may be output directly by the system, for example using an electronic visual display, graphical user interface (GUI), speaker or other output device comprised by, or connected to, the ultrasound probe and/or computing device, or may be output to another computing device in communication with the system, such as a personal computer or smartphone, in order to be output to a user by said other computing device.
[0089] Fig. 6b shows an example of the probe 600 being used to image a subject 650. A user 640 holds the probe, such that its acoustic lens end 620 is pressed against or into the exterior of the subject 650, such as a portion of their own body. In this position, an imaging plane 630 in which the probe 600 captures ultrasound images extends into the interior of the subject 650.
[0090] In use, the user holds the acoustic lens end 620 of the probe 600 in the same position on the subject 650 while tilting the probe 600 about an axis 660 parallel to the imaging plane 630 and tangential to the surface of the subject 650, such that tilting the probe 600 sweeps the imaging plane through the subject 650. In the illustrated example, this axis 660 is a lateral axis of the probe 600, perpendicular to the length of its handle. The reorientation of the probe 600 and the imaging plane 630 as the probe is tilted about the axis 660 are shown with double-headed arrows. Tilting an ultrasound probe 600 about a lateral axis 660 of the probe 600 in this manner is sometimes referred to as “fanning” the probe 600. As the probe 600 is tilted, the accelerometer 610 measures the changes in orientation of the probe 600 (and by extension, of the imaging plane 630).
[0091] Fig. 6c shows an example of the probe 600 being used to image the abdomen of a human patient 650. As in the illustrated example, the probe 600 may be arranged such that its imaging plane 630 is parallel to a longitudinal axis or midline of the body 650, and the probe 600 may be rotated about an axis 660 substantially parallel to the longitudinal axis or midline, such that all of the series of images are substantially parallel to said longitudinal axis or midline, and such that the ultrasound probe is rotated within a transverse plane 670 substantially perpendicular to the probe’s imaging plane throughout the motion. Capturing a series of ultrasound images in this arrangement may advantageously facilitate identifying, and compensating for, motion of organs of the patient 650 relative to the probe 600 within such planes, for example as a consequence of the patient’s breathing or heartbeat. In other examples, the ultrasound probe may be rotated within other planes such as sagittal or coronal planes.
[0092] Fig. 7a is a flowchart 700 of a method of imaging a subject 650. In some embodiments, the subject 650 is a patient’s body, such as a human patient’s body, and the methods or systems are medical imaging methods or systems. However, it will be appreciated that in alternative embodiments, the subject may be a non-human animal body or an inanimate device or object.
[0093] During measurement steps 710, 715 of the method 700, an ultrasound probe 600 comprising an accelerometer for detecting changes in its orientation is located on a region of interest on the subject 650. In some embodiments, the probe 600 is located in a target location on the region of interest, for example near a centre thereof. In some embodiments, the method 700 may comprise positioning the ultrasound probe 600 on the region of interest before the measurement steps 710, 715 are performed.
[0094] In some embodiments, instructions on where to position the probe 600 on and/or within the region of interest are output by the system. For example, an image of a probe 600 in position may be presented to the user. Providing instructions on positioning the ultrasound probe 600 may enable a non-medical-professional user to use the ultrasound probe to obtain a three-dimensional ultrasound medical image by following them. The region of interest may be input or selected by a user, for example using a user interface comprised by the probe, or a terminal or computing device in communication therewith.
[0095] In some embodiments, positioning the ultrasound probe 600 comprises detecting that the ultrasound probe 600 is positioned on the region of interest of the subject 650, or on a target location thereof. For example, a user 640 may move the probe across the surface of the subject 650 until the probe 600 is detected to be on the region of interest and/or target location. The probe 600 may be moved across the surface by sliding it over the surface, with its acoustic lens end 620 held against the surface, and/or with a longitudinal axis of the probe 600 extending substantially perpendicularly to the surface.
[0096] In some such embodiments, as the probe 600 is in the process of being positioned on the region of interest, one or more initial ultrasound images are captured using the ultrasound probe 600 and whether the probe is correctly positioned or not may be determined based on the contents of such initial ultrasound images. In some embodiments, such initial ultrasound images may be captured continuously and/or at a pre-set frequency.
[0097] In some embodiments, the system is configured to identify the presence, absence, and/or position of one or more features of the subject in each of the initial ultrasound images captured during positioning of the ultrasound probe. For example, using an image recognition artificial intelligence or machine learning model to identify any instances of the one or more features in the initial images. The system may be further configured to use the identity, position, and/or absence of the one or more identified features within an ultrasound image to determine whether the probe 600 was correctly positioned on the region of interest and/or a target location thereon when that ultrasound image was captured.
[0098] When the system determines that the probe 600 is not correctly positioned when an ultrasound image is captured, or after a pre-set period has elapsed and/or a pre-set number of images have been captured (for example, following initialisation of the system, provision of instructions to the user, or from some other initial time), it may output an indication that the probe is incorrectly positioned and/or instructions to, or how to, move the probe 600 to the correct position. In some embodiments, the actual location of the probe is determined from the one or more features, and the instructions may indicate a direction in which to move the probe 600 from the determined location to the correct position. When the system determines that the probe 600 is correctly positioned, or that the probe has been correctly positioned for a pre-set time period and/or number of images, a confirmation or other indication may be provided to the user and/or the next stage 710, 715 of the method may be performed.
[0099] In measurement steps 710, 715 of the method 700, after the ultrasound probe 600 has been positioned on the region of interest, a measurement phase is performed, during which a series of ultrasound images are captured 710 and orientation data is collected 715.
[0100] The measurement phase may be initiated when the system determines that the probe 600 has been correctly positioned as described above. In alternative embodiments, the measurement phase is initiated by a user, for example using one or more controls on the ultrasound probe or on a terminal or computing device in communication therewith.
[0101] Each captured 710 ultrasound image may be a two-dimensional planar image or “slice” in a plane corresponding to the orientation of the probe 600 while it was captured. As the series of ultrasound images are captured 710, a user 640 manipulates the probe 600 to reorient it relative to the subject 650. When reorienting the probe 600, the user 640 preferably holds the acoustic lens end 620 of the acoustic probe 600 on the same location on the subject 650.
[0102] In some embodiments, the probe may be rotated in substantially a single direction, for example by fanning the probe as described above with reference to Fig. 6b. In other embodiments, the probe may be rotated in multiple different directions, thereby capturing images in a greater variety of different orientations.
[0103] In some embodiments in which the subject 650 is a patient’s body, the probe 600 may be rotated about an axis parallel to the patient’s midline. For example, as described above with reference to Fig. 6c.
[0104] When each of the series of ultrasound images is captured 710, a respective orientation in which the probe is arranged while said ultrasound image of the series is being captured is determined from measurements of the accelerometer and may be recorded. The determined orientations may be relative to an absolute coordinate system, relative to each other, and/or relative to some reference orientation, such as an initial orientation when the first image of the series was captured, or when the probe was activated. The respective orientation determined for each ultrasound image defines the plane in which each ultrasound image is located. The captured images and their respective orientations may be stored in a memory.
[0105] In some embodiments, ultrasound images of the series and their associated orientations are captured continuously, for example at a set frequency. In alternative embodiments, an ultrasound image may instead be captured whenever the orientation of the probe measured by the accelerometer 610 differs from any orientation in which a previous ultrasound image of the series was captured, for example, by more than a threshold amount. In some embodiments, each ultrasound image and/or its associated orientation may be timestamped.
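The alternative capture strategy could be expressed as in the sketch below; the per-axis angle comparison, the threshold value and the function name are simplifying assumptions made only for illustration.

```python
# Sketch of capturing a new frame only when the probe orientation differs from
# every previously kept orientation by more than a threshold. Orientations are
# assumed to be (pitch, yaw, roll) tuples in degrees from the accelerometer, and
# the difference is taken per axis for simplicity.
import numpy as np

def should_capture(orientation, kept_orientations, threshold_deg: float = 1.0) -> bool:
    """Return True when the new orientation differs sufficiently from all kept ones."""
    new = np.asarray(orientation, dtype=float)
    for previous in kept_orientations:
        if np.max(np.abs(new - np.asarray(previous, dtype=float))) <= threshold_deg:
            return False
    return True
```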
[0106] During the measurement phase, a user may tilt or otherwise reorient the ultrasound probe, while holding it in contact with the region of interest. In some embodiments, instructions to reorient the probe 600, or a specific motion in which to reorient the probe 600 may be output by the system, for example, after the measurement phase is automatically initiated upon detection that the probe 600 has been correctly positioned as described above.
[0107] The measurement phase may continue for a fixed period of time, until a fixed number or range of orientations of ultrasound images have been captured, and/or until manually ended by a user, for example using one or more controls on the ultrasound probe 600 or on a terminal or computing device in communication therewith.
[0108] After the measurement phase 710, 715, in a second stage of the method 700, the series of captured ultrasound images are assembled 720 into a three-dimensional arrangement of images based on the recorded orientations in which they were captured.
[0109] Assembling the series of captured images into the arrangement may comprise selecting one of the ultrasound images as a reference image for the arrangement. The selected reference image may be the image of a portion of the subject closest to one end or extreme of the subject, which may be determined based on the associated orientation of the probe 600 when it was captured. For example, in some embodiments in which the images are of a portion of a patient’s body, the selected image is the most cranial (located closest towards the top of the patient’s head) and medial (located closest to the midline of the patient’s body, which extends from the top of the patient’s head towards their feet).
[0110] After an image is selected as a reference image, the remaining images of the series may be arranged 720 relative to the selected image in the arrangement, based on the difference between their orientations (corresponding to the orientations of the probe 600 when they were captured) and the orientation of the selected reference image.
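A minimal sketch of this arrangement step is given below, assuming the recorded orientations are pitch, yaw and roll angles in radians and an assumed Z-Y-X Euler convention; the convention, the in-plane coordinate layout and the function names are assumptions made only for illustration.

```python
# Sketch of arranging a captured slice relative to the reference image: rotation
# matrices are built from the recorded (pitch, yaw, roll) angles and the slice's
# pixel coordinates are rotated by its orientation difference from the reference.
import numpy as np

def rotation_matrix(pitch: float, yaw: float, roll: float) -> np.ndarray:
    """3x3 rotation matrix from Euler angles in radians (assumed Z-Y-X convention)."""
    cx, sx = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    cz, sz = np.cos(roll), np.sin(roll)
    rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return rz @ ry @ rx

def place_slice(pixel_coords: np.ndarray, orientation, reference_orientation) -> np.ndarray:
    """Rotate (N, 3) slice pixel coordinates (lying in the slice plane) by the
    difference between the slice's orientation and the reference orientation."""
    relative = rotation_matrix(*reference_orientation).T @ rotation_matrix(*orientation)
    return pixel_coords @ relative.T
```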
[0111] Fig. 8 shows a cross section of an example of a three-dimensional image arrangement 800 of ultrasound images constructed after a measurement phase as described above is performed, in which the probe is rotated in a single direction about a single axis. The arrangement comprises a reference image 810 which is arranged horizontally within the arrangement 800. The selected reference image is the most cranial (located closest towards the top of the patient’s head) and medial (located closest to the midline of the patient’s body) image of the series. The remaining images 820 are arranged below the reference image 810 relative thereto based on the differences in their orientations and the orientation of the reference image.
[0112] The example illustrated arrangement 800 comprises ultrasound images which are all separated by angles within a single plane, as captured by an ultrasound probe 600 only being rotated in a single direction during the measurement steps 710, 715. In other examples, the ultrasound probe may be rotated in multiple directions around multiple different axes, producing a more complex arrangement of ultrasound images.
[0113] Arrangements 800 of ultrasound images constructed using the method described herein can cover very large fields of view. In some situations, such arrangements can have fields of view of greater than 180°, for example, where an ultrasound probe is pressed into a relatively soft or yielding portion of the subject.
[0114] As shown in Fig. 8, a constructed arrangement 800 of ultrasound images may comprise differently sized gaps between adjacent images. Fig. 7b shows additional steps 730, 740 that may be performed in some embodiments of a method 700 as described above, in order to prevent the image losing quality or information as a result of excessively large such gaps.
[0115] After the arrangement is constructed 720, in a third stage of the method 700 any gaps between images 810, 820 within the arrangement 800 larger than a threshold separation may be identified 730. Such gaps may define areas of the region of interest of the subject that have been insufficiently imaged during the initial measurement phase and of which additional images are to be captured. The threshold separation may depend upon the region of interest and/or the subject that is being imaged.
[0116] The gaps between images 810, 820 within the arrangement may be one-dimensional angular separations between adjacent images, which may be compared to a threshold one-dimensional angular separation, or may be two-dimensional solid angles, which may be bounded by images and/or edges of the region of interest and which may be compared to a threshold two-dimensional solid angle.
[0117] In the example three-dimensional image arrangement 800 shown in Fig. 8, the gaps are one-dimensional separations that can be compared to a one-dimensional threshold angle. The illustrated example arrangement 800 includes one gap 830 in which the separation between adjacent images exceeds the threshold. Such a gap results in a lack of information and/or detail on this portion of the imaged region of interest of the subject.
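For a single-axis arrangement such as that of Fig. 8, gap identification can be sketched as below; the threshold value and function name are assumptions for illustration only.

```python
# Sketch of identifying over-threshold gaps in a single-axis arrangement: the tilt
# angles of the arranged images are sorted and adjacent separations larger than
# the threshold are reported as gaps requiring further imaging.
import numpy as np

def find_gaps(tilt_angles_deg, threshold_deg: float = 5.0):
    """Return (lower_angle, upper_angle) pairs bounding each over-threshold gap."""
    angles = np.sort(np.asarray(tilt_angles_deg, dtype=float))
    separations = np.diff(angles)
    return [(angles[i], angles[i + 1])
            for i, sep in enumerate(separations) if sep > threshold_deg]
```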
[0118] When one or more gaps 830 between images 810, 820 within the arrangement 800 larger than a threshold separation are identified 730, instructions to perform further imaging may be output 740 by the system, in a fourth stage of the method 700. Such further imaging may then be performed in an additional measurement phase.
[0119] The instructions output 740 when a gap 830 exceeding the threshold is identified may instruct a user to initiate and/or perform an additional measurement phase. For example, the instructions may comprise a user interface element enabling a user to initiate an additional measurement phase. Alternatively, or additionally, identifying one or more gaps 830 exceeding the threshold may automatically initiate an additional measurement phase if and/or when the ultrasound probe 600 is arranged to perform another measurement phase.
[0120] In some embodiments, the instructions output when one or more gaps 830 exceeding the threshold are identified 730 may simply instruct the user to perform additional measurements while moving the ultrasound probe or to repeat their previous movement with the ultrasound probe 600.
[0121] In other embodiments, the instructions may instruct a user to reorient the probe 600 to and/or through one or more specific orientations corresponding to the threshold-exceeding gaps 830, in one or more specific directions, and/or at specific speeds. For example, the instructions may indicate one or more orientations in which to hold the ultrasound probe while performing the further imaging, the one or more orientations in which to hold the ultrasound probe corresponding to orientations of the one or more gaps. In some embodiments, during the additional measurement phase, the system may output an indication when images have been captured covering each and/or all of the identified gaps 830.
[0122] When a gap 830 is identified in an arrangement 800 comprising ultrasound images with angular separations in a single dimension, as in the example shown in Fig. 8, the instructions may be to rotate the probe about the same axis as it was rotated about in the initial measurement steps 710, 715. In some examples, the instructions may specify to rotate the probe in the opposite direction about the axis, and/or to rotate the probe more slowly than in the initial measurement steps 710, 715.
[0123] In some embodiments, the system may be configured to identify one or more features of the subject in a plurality or all of the ultrasound images captured during the measurement phase, for example, using an artificial intelligence or machine learning model.
[0124] The identified features may be anatomical structures of an imaged patient’s body, such as individual organs and/or blood vessels, and/or parts thereof. In some embodiments, identifying features may comprise identifying parts of said images showing any of a preselected group of structures, which may correspond to the region of the subject being imaged. Alternatively, identifying features in the images may comprise identifying geometric structures within the images, which may not be part of a preselected group.
[0125] An artificial intelligence or machine learning model with which the features are identified may be an artificial intelligence configured and/or trained to perform semantic segmentation.
[0126] After the one or more features are identified in the images, the positions of features that have been identified in more than one of the images may be compared between those images to determine a lateral offset between those images. Lateral motion herein refers to movement within the plane of the ultrasound image, such as in a direction towards or away from the ultrasound probe 600, or in a direction across the plane of the ultrasound image.
[0127] Such lateral offsets may be motion artefacts resulting from the subject moving during the capturing of the images. For example, when a patient whose abdomen is being imaged breathes in and out, this may displace a probe held against their skin inwards and outwards relative to some of their internal organs, and/or may displace one or more imaged internal organs within their body. For example a patient’s liver will move downwards within their body as they breathe in and will move upwards as they breathe out. This can result in organs or parts thereof being located in different positions within the patient’s body in different ultrasound images of the series.
[0128] As described above, in some embodiments, the probe is arranged and pivoted such that its imaging plane remains parallel to the midline or longitudinal axis of the patient’s body, for example, as shown in Fig. 6c. In such embodiments, movement of organs upwards, downwards, towards the probe, and/or away from the probe between images of the series will be visible.
[0129] After such a lateral offset between images has been identified, images that have been laterally offset with respect to each other may be translated within the three-dimensional arrangement to remove such lateral offsets.
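One way the lateral offset and the compensating translation might be computed is sketched below using the centroid of a shared segmented structure; the centroid-difference approach, the SciPy calls, and the function names are illustrative assumptions rather than the specific method of the embodiments.

```python
import numpy as np
from scipy import ndimage

def lateral_offset(label_map_a, label_map_b, structure_class):
    """Estimate the in-plane (depth, across) offset, in pixels, of a shared
    structure between two segmented frames from the shift of its centroid."""
    centroid_a = ndimage.center_of_mass(label_map_a == structure_class)
    centroid_b = ndimage.center_of_mass(label_map_b == structure_class)
    return np.subtract(centroid_b, centroid_a)

def compensate(image_a, offset):
    """Translate frame A by the estimated offset so that the shared structure
    lines up with its position in frame B within the 3D arrangement."""
    return ndimage.shift(image_a, shift=offset, order=1, mode="nearest")
```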
[0130] In some embodiments, the three-dimensional arrangement may be, or may be reformatted into, a three-dimensional image format comprising an array of voxels. Such a format may facilitate isolation and/or analysis of individual parts of the subject, which may be represented by groups or clusters of voxels within the array. Alternatively, the three-dimensional arrangement may be reformatted into a three-dimensional array or a point cloud.
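A possible voxelisation of the arrangement is sketched below: each pixel of each frame is rotated into a common 3D frame using that image's orientation and accumulated into the nearest voxel, averaging where frames overlap. The coordinate convention, the grid parameters, and the simple nearest-voxel scatter are assumptions for illustration; a point-cloud output could be produced by keeping the rotated coordinates instead of binning them.

```python
import numpy as np

def voxelise(frames, rotations, pixel_spacing_mm, voxel_size_mm, grid_shape):
    """Scatter 2D frames, each with a 3x3 rotation matrix giving its imaging
    plane's orientation, into a regular voxel grid (mean of overlapping samples)."""
    accum = np.zeros(grid_shape, dtype=np.float64)
    counts = np.zeros(grid_shape, dtype=np.int64)
    centre = np.array(grid_shape, dtype=float)[:, None] / 2.0
    for frame, rotation in zip(frames, rotations):
        rows, cols = np.indices(frame.shape)
        rows, cols = rows.ravel(), cols.ravel()
        # In-plane coordinates in mm: width across the image, depth away from the probe
        plane = np.stack([(cols - frame.shape[1] / 2.0) * pixel_spacing_mm,
                          rows * pixel_spacing_mm,
                          np.zeros(rows.size)])
        voxel_idx = np.round(rotation @ plane / voxel_size_mm + centre).astype(int)
        ok = np.all((voxel_idx >= 0) & (voxel_idx < np.array(grid_shape)[:, None]), axis=0)
        np.add.at(accum, tuple(voxel_idx[:, ok]), frame.ravel()[ok])
        np.add.at(counts, tuple(voxel_idx[:, ok]), 1)
    return np.where(counts > 0, accum / np.maximum(counts, 1), 0.0)
```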
[0131] A plurality of three-dimensional ultrasound images may be generated by applying the techniques described above with respect to Figs. 6 to 8. The resulting three-dimensional images may comprise a first image and a second image that are combined using the techniques described above with respect to Figs. 1 to 5.
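By way of illustration only, the transformation between two such volumes could be estimated by registering point clouds derived from a shared segmented structure; the sketch below uses Open3D's point-to-point ICP as a stand-in for the registration step, not the artificial-intelligence-based approach the embodiments may use, and the label values and parameters are assumptions.

```python
import numpy as np
import open3d as o3d

def to_point_cloud(segmented_volume, structure_label):
    """Convert the voxels belonging to one segmented-out structure into a point cloud."""
    points = np.argwhere(segmented_volume == structure_label).astype(float)
    cloud = o3d.geometry.PointCloud()
    cloud.points = o3d.utility.Vector3dVector(points)
    return cloud

def register_volumes(first_volume, second_volume, structure_label, max_dist=5.0):
    """Estimate the rigid transformation mapping the shared structure in the
    first volume onto the same structure in the second volume."""
    source = to_point_cloud(first_volume, structure_label)
    target = to_point_cloud(second_volume, structure_label)
    result = o3d.pipelines.registration.registration_icp(
        source, target, max_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation   # 4x4 transformation matrix

# The returned matrix could then be applied to the first volume's point cloud
# (e.g. cloud.transform(matrix)) before the two images are merged into a composite.
```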
[0132] Implementations of the subject matter and the operations described in this specification can be realized in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Implementations of the subject matter described in this specification can be realized using one or more computer programs, i.e., one or more modules of computer program instructions, encoded on computer storage medium for execution by, or to control the operation of, data processing apparatus. Alternatively or in addition, the program instructions can be encoded on an artificially generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal that is generated to encode information for transmission to suitable receiver apparatus for execution by a data processing apparatus. A computer storage medium can be, or be included in, a computer-readable storage device, a computer-readable storage substrate, a random or serial access memory array or device, or a combination of one or more of them. Moreover, while a computer storage medium is not a propagated signal, a computer storage medium can be a source or destination of computer program instructions encoded in an artificially generated propagated signal. The computer storage medium can also be, or be included in, one or more separate physical components or media (e.g., multiple CDs, disks, or other storage devices).
[0133] While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed the novel methods and systems described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of methods and systems described herein may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms of modifications as would fall within the scope and spirit of the inventions.
[0134] Further embodiments are set out in the following clauses:
1.1 A method of imaging a subject, the method comprising: capturing a series of ultrasound images using an ultrasound probe; determining a respective orientation of the ultrasound probe when each of the series of ultrasound images is captured, each orientation being derived from measurements of an accelerometer comprised by the ultrasound probe; and arranging each of the series of captured ultrasound images in a three-dimensional arrangement based on the orientation of the ultrasound probe when that ultrasound image was captured.
1.2 A method according to paragraph 1.1, further comprising: determining whether any gaps between the arranged ultrasound images exceed a threshold separation; and when one or more gaps between the arranged ultrasound images exceeding the threshold separation are identified, outputting instructions to perform further imaging.
1.3. A method according to paragraph 1.2, wherein the instructions to perform further imaging are output by or to a graphical user interface.
1.4 A method according to paragraph 1.2 or paragraph 1.3, wherein the instructions to perform further imaging comprise instructions indicating one or more orientations in which to position the ultrasound probe while performing the further imaging, the one or more orientations in which to position the ultrasound probe corresponding to orientations of the one or more gaps.
1.5. A method according to any of paragraphs 1.1 to 1.4, further comprising: positioning the ultrasound probe on a region of interest of the subject before capturing the series of ultrasound images; and reorienting the ultrasound probe while holding it on the region of interest while the series of ultrasound images are captured.
1.6. A method according to paragraph 1.5, wherein positioning the ultrasound probe on the region of interest comprises positioning the probe on the subject, capturing an initial ultrasound image using the ultrasound probe, identifying one or more features of the subject in the initial image, and using the one or more identified features to determine whether the probe is correctly positioned on the region of interest.
1.7. A method according to paragraph 1.6, the method comprising, when the probe is determined not to be correctly positioned, outputting instructions to reposition the ultrasound probe.
1.8. A method according to any of paragraphs 1.1 to 1.7, wherein arranging the series of captured ultrasound images in the three-dimensional arrangement comprises selecting one of the images as a reference image and arranging the remaining images relative to the reference image in the arrangement, based on the differences in the orientations in which they were captured and the orientation in which the reference image was captured.
1.9. A method according to any of paragraphs 1.1 to 1.8, further comprising: identifying a feature of the subject in a plurality of the captured ultrasound images, comparing the position of the identified feature in the plurality of the captured ultrasound images to determine a lateral offset between the plurality of captured ultrasound images, and translating one or more of the plurality of captured ultrasound images to compensate for the lateral offset.
1.10. A method according to any of paragraphs 1.1 to 1.9, further comprising: capturing a series of ultrasound images using an ultrasound probe whilst the ultrasound probe is rotated within a plane substantially perpendicular to the probe’s imaging plane throughout the motion.
1.11. A system for imaging a subject, the system comprising: an ultrasound probe configured to capture a series of ultrasound images; an accelerometer, comprised by the ultrasound probe; an output device for providing instructions to a user; and a processor, the processor configured to: use measurements of the accelerometer to derive a respective orientation of the ultrasound probe when each of the series of ultrasound images is captured; arrange each of the series of captured ultrasound images in a three-dimensional arrangement based on the orientation of the ultrasound probe when that ultrasound image was captured.
1.12. A system according to paragraph 1.11, the processor further configured to: determine whether any gaps between the arranged ultrasound images exceed a threshold separation; and when one or more gaps between the arranged ultrasound images exceeding the threshold separation are identified, output instructions to perform further imaging using the output device.
1.13. A system according to paragraph 1.11, wherein the output device is an electronic display configured to display the instructions using a graphical user interface.
1.14. A system according to any of paragraphs 1.11 to 1.13, wherein the instructions to perform further imaging comprise instructions indicating one or more orientations in which to position the ultrasound probe while performing the further imaging, the one or more orientations in which to position the ultrasound probe corresponding to orientations of the one or more gaps.
1.15. A system according to any of paragraphs 1.10 to 1.14, wherein the ultrasound probe is further configured to capture an initial ultrasound image before capturing the series of ultrasound images, and wherein the processor is further configured to identify one or more features of the subject in the initial image, and use the one or more identified features to determine whether the probe is correctly positioned on the region of interest.
1.16. A system according to any of paragraphs 1.10 to 1.15, wherein arranging the series of captured ultrasound images in the three-dimensional arrangement comprises selecting one of the images as a reference image and arranging the remaining images relative to the reference image in the arrangement, based on the differences in the orientations in which they were captured and the orientation in which the reference image was captured.
1.17. A system according to any of paragraphs 1.11 to 1.16, wherein the system is further configured to: identify a feature of the subject in a plurality of the captured ultrasound images; compare the position of the identified feature in the plurality of the captured ultrasound images to determine a lateral offset between the plurality of captured ultrasound images, and translate one or more of the plurality of captured ultrasound images to compensate for the lateral offset.
1.18. A storage medium comprising computer instructions executable by one or more processors, the computer instructions when executed by the one or more processors causing the one or more processors to perform a method according to any of paragraphs 1.1 to 1.10.
2.1 A method of combining first and second three-dimensional images of a subject, the method comprising: identifying one or more shared structures of the subject shown in both of the first and second images; determining a transformation mapping a three-dimensional representation of the one or more shared structures in the first image onto a three-dimensional representation of the one or more shared structures in the second image; and applying the determined transformation to the first image and combining the transformed first image and the second image into a composite image.
2.2 A method according to paragraph 2.1, wherein identifying the one or more shared structures of the subject shown in both the first and second images comprises identifying structures of the subject within each of the first and second images and determining one or more structures that are present in both the first and second images.
2.3. A method according to paragraph 2.2, wherein identifying structures in the first and second images comprises identifying parts of said images showing any of a preselected group of structures.
2.4 A method according to paragraph 2.2 or paragraph 2.3, wherein the structures within each of the first and second three-dimensional images are identified using a three-dimensional segmentation artificial intelligence.
2.5. A method according to paragraph 2.4, wherein the three-dimensional segmentation artificial intelligence outputs a segmented-out copy of at least one of the structures identified in the first and second image, the segmented-out copy of a structure comprising the portion of the first or second image occupied by that structure, with the remainder of the first or second image substantially removed.
2.6. A method according to any of paragraphs 2.1 to 2.5, wherein the three-dimensional representation of the one or more shared structures in the first image is a first point cloud and the three-dimensional representation of the one or more shared structures in the second image is a second point cloud, and wherein the transformation is determined using the first and second point clouds.
2.7. A method according to paragraph 2.6 wherein the first and second point clouds are derived from segmented-out copies of the one or more shared structures in the first and second images respectively.
2.8. A method according to paragraph 2.6 or 2.7 when dependent upon paragraph 2.5, wherein the first and second point clouds are derived from segmented-out copies of the one or more shared structures in the first and second images respectively that are output by the three-dimensional segmentation artificial intelligence.
2.9. A method according to any of paragraphs 2.1 to 2.8, wherein the transformation is determined using an artificial intelligence model configured to receive the three-dimensional representation of the one or more shared structures in the first image and the three-dimensional representation of the one or more shared structures in the second image as inputs.
2.10. A method according to any of paragraphs 2.1 to 2.9, wherein the transformation is a transformation matrix.
2.11. A system for combining first and second three-dimensional images of a subject, the system configured to: identify one or more shared structures of the subject shown in both of the first and second images; determine a transformation mapping a three-dimensional representation of the one or more shared structures in the first image onto a three-dimensional representation of the one or more shared structures in the second image; and apply the determined transformation to the first image and combine the transformed first image and the second image into a composite image.
2.12. A system according to paragraph 2.11, configured to identify the one or more shared structures of the subject shown in both the first and second images by identifying structures of the subject within each of the first and second images and determining one or more structures that are present in both the first and second images.
2.13. A system according to paragraph 2.11 or paragraph 2.12, configured to identify structures within each of the first and second three-dimensional images using a three-dimensional segmentation artificial intelligence.
2.14. A system according to paragraph 2.13, wherein the three-dimensional segmentation artificial intelligence outputs a segmented-out copy of at least one of the structures identified in the first and second image, the segmented-out copy of a structure comprising the portion of the first or second image occupied by that structure, with the remainder of the first or second image substantially removed.
2.15. A system according to any of paragraphs 2.11 to 2.14, wherein the three-dimensional representation of the one or more shared structures in the first image is a first point cloud and the three-dimensional representation of the one or more shared structures in the second image is a second point cloud, and wherein the transformation is determined using the first and second point clouds.
2.16. A system according to paragraph 2.15, wherein the first and second point clouds are derived from segmented-out copies of the one or more shared structures in the first and second images respectively.
2.17. A system according to paragraph 2.15 when dependent upon paragraph 2.14, wherein the first and second point clouds are derived from segmented-out copies of the one or more shared structures in the first and second images respectively that are output by the three-dimensional segmentation artificial intelligence.
2.18. A system according to any of paragraphs 2.11 to 2.17, configured to determine the transformation using an artificial intelligence model configured to receive the three-dimensional representation of the one or more shared structures in the first image and the three-dimensional representation of the one or more shared structures in the second image as inputs.
2.19. A storage medium comprising computer instructions executable by one or more processors, the computer instructions when executed by the one or more processors causing the one or more processors to perform a method according to any of paragraphs 2.1 to 2.10.

Claims

1. A method of combining first and second three-dimensional images of a subject, the method comprising: identifying one or more shared structures of the subject shown in both of the first and second images by, for each of the first and second images, using artificial intelligence to identify structures within each of the respective three-dimensional images; determining a transformation mapping a three-dimensional representation of the one or more shared structures in the first image onto a three-dimensional representation of the one or more shared structures in the second image; and applying the determined transformation to the first image and combining the transformed first image and the second image into a composite image.
2. A method as claimed in claim 1, wherein the structures identified using artificial intelligence belong to a preselected group of structures corresponding to a region of the subject imaged in the first and second images.
3. A method according to claim 2, wherein identifying structures in the first and second images comprises identifying parts of said images showing any of a preselected group of structures.
4. A method according to any preceding claim, wherein the artificial intelligence is a machine learning model trained using a training data set comprising three-dimensional images.
5. A method according to any of claims 1 to 3, wherein the artificial intelligence is a three-dimensional segmentation artificial intelligence.
6. A method according to claim 5, wherein the three-dimensional segmentation artificial intelligence outputs a segmented-out copy of at least one of the structures identified in the first and second image, the segmented-out copy of a structure comprising the portion of the first or second image occupied by that structure, with the remainder of the first or second image substantially removed.
7. A method as claimed in any preceding claim, wherein identifying the structures using artificial intelligence comprises using artificial intelligence to identify the structures within each of a set of two-dimensional slices defining the respective three-dimensional image.
8. A method according to any preceding claim, wherein identifying the one or more shared structures of the subject shown in both the first and second images comprises identifying structures of the subject within each of the first and second images and determining one or more structures that are present in both the first and second images.
9. A method according to any preceding claim, wherein the three-dimensional representation of the one or more shared structures in the first image is a first point cloud and the three-dimensional representation of the one or more shared structures in the second image is a second point cloud, and wherein the transformation is determined using the first and second point clouds.
10. A method according to claim 9, wherein the first and second point clouds are derived from segmented-out copies of the one or more shared structures in the first and second images respectively.
11. A method according to claim 9, wherein the first and second point clouds are derived from segmented-out copies of the one or more shared structures in the first and second images respectively that are output by the artificial intelligence.
12. A method according to any preceding claim, wherein the transformation is determined using a further artificial intelligence model configured to receive the three-dimensional representation of the one or more shared structures in the first image and the three-dimensional representation of the one or more shared structures in the second image as inputs.
13. A method according to any preceding claim, wherein the transformation is a transformation matrix.
14. A system for combining first and second three-dimensional images of a subject, the system configured to: identify one or more shared structures of the subject shown in both of the first and second images by, for each of the first and second images, using artificial intelligence to identify structures within each of the respective three-dimensional images; determine a transformation mapping a three-dimensional representation of the one or more shared structures in the first image onto a three-dimensional representation of the one or more shared structures in the second image; and apply the determined transformation to the first image and combine the transformed first image and the second image into a composite image.
15. A storage medium comprising computer instructions executable by one or more processors, the computer instructions when executed by the one or more processors causing the one or more processors to perform a method according to any of claims 1 to 13.
PCT/GB2023/052794 2022-10-25 2023-10-25 System and method for three-dimensional imaging WO2024089423A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
GB2215790.3A GB2623771A (en) 2022-10-25 2022-10-25 Combining three-dimensional images
GB2215789.5 2022-10-25
GB2215790.3 2022-10-25
GB2215789.5A GB2623770A (en) 2022-10-25 2022-10-25 Ultrasound imaging

Publications (1)

Publication Number Publication Date
WO2024089423A1 true WO2024089423A1 (en) 2024-05-02

Family

ID=88731715

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2023/052794 WO2024089423A1 (en) 2022-10-25 2023-10-25 System and method for three-dimensional imaging

Country Status (1)

Country Link
WO (1) WO2024089423A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2019222619A1 (en) * 2018-02-14 2020-09-24 Elekta, Inc. Atlas-based segmentation using deep-learning
WO2021209348A1 (en) * 2020-04-16 2021-10-21 Koninklijke Philips N.V. Bi-plane and three-dimensional ultrasound image acquisition for generating roadmap images, and associated systems and devices
WO2022094554A1 (en) * 2020-10-27 2022-05-05 Mako Surgical Corp. Ultrasound based multiple bone registration surgical systems and methods of use in computer-assisted surgery
