GB2623771A - Combining three-dimensional images - Google Patents

Combining three-dimensional images

Info

Publication number
GB2623771A
GB2623771A (application GB2215790.3A)
Authority
GB
United Kingdom
Prior art keywords
image
structures
images
dimensional
shared
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
GB2215790.3A
Other versions
GB202215790D0 (en)
Inventor
Carrier Jason
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Through Leaves Ltd
Original Assignee
Through Leaves Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Through Leaves Ltd filed Critical Through Leaves Ltd
Priority to GB2215790.3A
Publication of GB202215790D0
Priority to PCT/GB2023/052794 (WO2024089423A1)
Publication of GB2623771A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/30 - Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33 - Determination of transform parameters for the alignment of images, i.e. image registration, using feature-based methods
    • G06T 7/0002 - Inspection of images, e.g. flaw detection
    • G06T 7/0012 - Biomedical image inspection
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10072 - Tomographic images
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 2576/00 - Medical imaging apparatus involving image processing or analysis
    • A61B 8/00 - Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/52 - Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves

Abstract

A first invention relates to a method of merging first and second 3D images of a subject by identifying shared structures of the subject shown in both of the first and second images 110. A transformation is determined for mapping a three-dimensional representation of the shared structures in the first image onto a three-dimensional representation of the shared structures in the second image 120. The determined transformation is applied to the first image and a composite image is formed by combining the transformed first image and the second image 130. Preferably, the structures are identified using a three-dimensional segmentation artificial intelligence. Preferably, the three-dimensional representations of the shared structures in both images are point clouds. Preferably, the point clouds are derived from segmented-out copies of the shared structures, wherein the segmented-out copies comprise the portion of the image occupied by the structure, with the remainder of the image being substantially removed. A second invention relates to a system containing features of the first invention. A third invention relates to a storage medium comprising instructions to execute the method of the first invention.

Description

Combining Three-Dimensional Images
FIELD
[0001] Embodiments described herein relate to methods and systems for combining three-dimensional images of a subject.
BACKGROUND
[0002] Many three-dimensional imaging techniques, such as three-dimensional medical ultrasound imaging, are only suitable for obtaining images of limited portions of a subject, such as a patient's body. In some situations it may be necessary to analyse portions of the subject which extend across multiple such three-dimensional images, which can introduce difficulties in such analysis.
[0003] An aim of the present invention is to provide means for combining a plurality of three-dimensional images of a subject into a single three-dimensional image.
SUMMARY OF THE INVENTION
[0004] According to an embodiment, there is provided a method of combining first and second three-dimensional images of a subject, the method comprising: identifying one or more shared structures of the subject shown in both of the first and second images; determining a transformation mapping a three-dimensional representation of the one or more shared structures in the first image onto a three-dimensional representation of the one or more shared structures in the second image; and applying the determined transformation to the first image and combining the transformed first image and the second image into a composite image.
[0005] Such a method enables a composite image to be constructed from first and second three-dimensional images of a subject that show one or more of the same features, without requiring the relative arrangement of the first and second three-dimensional images to be known.
[0006] In some embodiments, identifying the one or more shared structures of the subject shown in both the first and second images comprises identifying structures of the subject within each of the first and second images and determining one or more structures that are present in both the first and second images.
[0007] In some embodiments, identifying structures in the first and second images comprises identifying parts of said images showing any of a preselected group of structures.
[0008] In some embodiments, the structures within each of the first and second three-dimensional images are identified using a three-dimensional segmentation artificial intelligence.
[0009] In some embodiments, the three-dimensional segmentation artificial intelligence outputs a segmented-out copy of at least one of the structures identified in the first and second image, the segmented-out copy of a structure comprising the portion of the first or second image occupied by that structure, with the remainder of the first or second image substantially removed.
[0010] In some embodiments, the three-dimensional representation of the one or more shared structures in the first image is a first point cloud and the three-dimensional representation of the one or more shared structures in the second image is a second point cloud, and wherein the transformation is determined using the first and second point clouds.
[0011] In some embodiments, the first and second point clouds are derived from segmented-out copies of the one or more shared structures in the first and second images respectively.
[0012] In some embodiments, the first and second point clouds are derived from segmented-out copies of the one or more shared structures in the first and second images respectively that are output by a three-dimensional segmentation artificial intelligence as described above.
[0013] In some embodiments, the transformation is determined using an artificial intelligence model configured to receive the three-dimensional representation of the one or more shared structures in the first image and the three-dimensional representation of the one or more shared structures in the second image as inputs.
[0014] In some embodiments, the transformation is a transformation matrix.
[0015] According to another embodiment, there is provided a system for combining first and second three-dimensional images of a subject, the system configured to: identify one or more shared structures of the subject shown in both of the first and second images; determine a transformation mapping a three-dimensional representation of the one or more shared structures in the first image onto a three-dimensional representation of the one or more shared structures in the second image; and apply the determined transformation to the first image and combine the transformed first image and the second image into a composite image.
[0016] In some embodiments, the system is configured to identify the one or more shared structures of the subject shown in both the first and second images by identifying structures of the subject within each of the first and second images and determining one or more structures that are present in both the first and second images.
[0017] In some embodiments, the system is configured to identify structures within each of the first and second three-dimensional images using a three-dimensional segmentation artificial intelligence.
[0018] In some embodiments, the three-dimensional segmentation artificial intelligence outputs a segmented-out copy of at least one of the structures identified in the first and second image, the segmented-out copy of a structure comprising the portion of the first or second image occupied by that structure, with the remainder of the first or second image substantially removed.
[0019] In some embodiments, the three-dimensional representation of the one or more shared structures in the first image is a first point cloud and the three-dimensional representation of the one or more shared structures in the second image is a second point cloud, and wherein the transformation is determined using the first and second point clouds.
[0020] In some embodiments, the first and second point clouds are derived from segmented-out copies of the one or more shared structures in the first and second images respectively.
[0021] In some embodiments, the first and second point clouds are derived from segmented-out copies of the one or more shared structures in the first and second images respectively that are output by a three-dimensional segmentation artificial intelligence as described above.
[0022] In some embodiments, the system is configured to determine the transformation using an artificial intelligence model configured to receive the three-dimensional representation of the one or more shared structures in the first image and the three-dimensional representation of the one or more shared structures in the second image as inputs.
[0023] According to another embodiment, there is provided a storage medium comprising computer instructions executable by one or more processors, the computer instructions when executed by the one or more processors causing the one or more processors to perform a method as described herein.
BRIEF DESCRIPTION OF THE FIGURES
[0024] Fig. 1 is a flowchart of an embodiment of a method of combining three-dimensional images;
[0025] Fig. 2a is a three-dimensional image of a left lobe of a patient's liver;
[0026] Fig. 2b is a three-dimensional image of a right lobe of the patient's liver shown in Fig. 2a;
[0027] Fig. 3a shows a three-dimensional representation of a gallbladder extracted from the image of Fig. 2a;
[0028] Fig. 3b shows a three-dimensional representation of a gallbladder extracted from the image of Fig. 2b;
[0029] Fig. 4 shows a combination of the images of Figs. 2a and 2b using a transformation generated from the extracted gallbladder representations of Figs. 3a and 3b; and
Fig. 5 is a flowchart of a specific embodiment of a method of combining two three-dimensional anatomical images.
DETAILED DESCRIPTION
[0030] Referring to Figures 1 to 5 generally, embodiments of methods and systems for combining multiple three-dimensional images of a subject are described, in which structures within first and second images are identified and used to determine a transformation for mapping one of the images onto the other.
[0031] Such methods and systems may be used to combine overlapping three-dimensional images of a subject, such as three-dimensional ultrasound images of adjacent parts of a patient's body, into a single combined image, without requiring the images to be captured in a known or fixed orientation.
[0032] The three-dimensional images may each be or comprise a three-dimensional array, a voxel-based three-dimensional image, a point cloud, an array of two-dimensional tomographic images or slices in a fixed arrangement, or any other suitable three-dimensional image format. For example, an image may comprise a series of two-dimensional tomographic images or slices captured as an imaging device is moved and/or rotated relative to a subject. Individual tomographic images or slices may have labels or timestamps associated therewith which may indicate their arrangement within the three-dimensional image.
[0033] The image may be in a file format such as .nifti, .npy, or .ply, which are widely used in three-dimensional medical imaging. In some embodiments, the method may comprise transforming one or both of the images into such a format.
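As a minimal sketch of such a conversion (assuming the nibabel and numpy packages, and a hypothetical file name), a .nifti image might be stored as a three-dimensional array in .npy format as follows:

```python
import numpy as np
import nibabel as nib  # assumed dependency; nibabel reads NIfTI (.nii/.nii.gz) files

# "scan.nii.gz" is a hypothetical file name used for illustration only.
img = nib.load("scan.nii.gz")
volume = np.asarray(img.dataobj, dtype=np.float32)  # 3D voxel array, e.g. (D, H, W)
np.save("scan.npy", volume)  # store in a three-dimensional array format
```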
[0034] The three-dimensional images may be medical images and the subject may be a patient's body, such as a human patient's body, wherein the methods or systems are medical imaging methods or systems. However, it will be appreciated that in alternative embodiments the subject may be a non-human animal body or an inanimate device or object. The two three-dimensional images may be of different, but at least partially overlapping, regions of the subject.
[0035] The three-dimensional images may be magnetic resonance imaging (MRI) images, computerized tomography (CT) scan images, ultrasound images, positron emission tomography (PET) images, or any other suitable form of three-dimensional image.
[0036] In some embodiments, one or both of the three-dimensional images may comprise associated metadata, for example indicating the portion of the subject shown in the image, and/or an orientation of the three-dimensional image relative to the subject.
[0037] Fig. 1 shows a flowchart 100 of an embodiment of a method of combining two three-dimensional images of a subject.
[0038] In a first step 110 of the method, one or more structures of the subject shown in both of the two images are identified. The structures may be anatomical structures of an imaged body, such as individual organs, blood vessels and/or components thereof.
[0039] Identifying the shared structures identifies parts of the subject that are at least partially shown in both images, enabling the different positions and orientations of these structures in the two images to be used to determine the relative positions and orientations of the portions of the subject shown in the two images.
[0040] Identifying the structures shown in both of the images may comprise identifying a first set of structures shown in the first image and a second set of structures shown in the second image. The first and second sets may then be compared to identify those structures shown in both the first and second images. Identifying structures in each of the first and second images may comprise detecting and labelling the detected structures.
[0041] In some embodiments, identifying structures in the first and second images may comprise identifying parts of said images showing any of a preselected group of structures. The preselected group of structures may correspond to the region of the subject being imaged, and different preselected groups of structures may be used for images of different parts of the subject. For example, when identifying structures in an image of a patient's abdomen the preselected group of structures may comprise the liver, the inferior vena cava (IVC), the aorta, the hepatic veins, the portal veins, and/or the kidney.
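A minimal sketch of this selection step follows; the region-to-structures mapping and the label names are illustrative assumptions, not data structures prescribed by the present disclosure:

```python
# Hypothetical preselected structure groups, keyed by imaged region.
PRESELECTED = {
    "abdomen": {"liver", "ivc", "aorta", "hepatic_veins", "portal_veins", "kidney"},
}

def shared_structures(found_in_first, found_in_second, region="abdomen"):
    """Structures identified in both images, restricted to the region's
    preselected group."""
    return set(found_in_first) & set(found_in_second) & PRESELECTED[region]
```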
[0042] Alternatively or additionally, identifying structures in the first and second images may comprise identifying geometric structures within the images, which may not be part of a preselected group.
[0043] The structures may be identified within the two three-dimensional images using an artificial intelligence system configured to identify structures of the subject (such as structures of a preselected group as described above) within three-dimensional images, such as a machine learning model trained using a data set comprising a plurality of three-dimensional images as described herein.
[0044] In some embodiments, the structure identifying artificial intelligence is an artificial intelligence configured and/or trained to perform semantic segmentation of three-dimensional images, such as a U-Net or V-Net convolutional neural network or variant thereof. The artificial intelligence may receive a three-dimensional image as an input and may output the identity and location within the image of each structure identified therein.
In some embodiments, the structure-identifying artificial intelligence may be configured to output a segmented-out copy of one, some, or each of the structures identified in the image. The segmented-out copy of a structure comprises the portion of the input three-dimensional image occupied by that structure, with the remainder of the image substantially removed.
[0045] Three-dimensional images may be input to the artificial intelligence in a three-dimensional array format such as an .npy file. In such embodiments, the method may comprise converting the three-dimensional images into such a three-dimensional array format, such as an .npy file, if they are not already in such a format. Any segmented-out copies of identified structures may also be output in such a three-dimensional array format.
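Given an image array and an integer label map produced by a segmentation model, a segmented-out copy of one structure might be produced as in the following sketch (function and parameter names are hypothetical):

```python
import numpy as np

def segment_out(image, label_map, structure_id):
    """Segmented-out copy of one structure: keep the voxels of `image` that
    the segmentation assigned to `structure_id`, zero the remainder, and so
    preserve the structure's position within the original volume."""
    return np.where(label_map == structure_id, image, 0)
```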
[0046] The structure-identifying artificial intelligence may be a machine learning model trained using a training data set comprising three-dimensional images, such as three-dimensional medical images, and associated identities and locations of a set of structures in each image. In some embodiments, the training data may further comprise segmented-out copies of one, some, or each of the set of structures in each image.
[0047] In some embodiments, the structure-identifying artificial intelligence is configured and/or trained to identify structures in medical images of a specific part of a patient's body. A plurality of such structure-identifying artificial intelligences may be configured and/or trained to identify structures in images of different parts of a patient's body, and the structures in the first and second images may be identified using artificial intelligences corresponding to the parts of a patient's body shown therein.
[0048] Alternatively, or additionally, the structures may be identified in each image of an array of two-dimensional tomographic images or slices defining a three-dimensional image, using an artificial intelligence or machine learning system configured to identify structures of the subject (such as structures of a preselected group as described above) within two-dimensional images or slices, for example, an artificial intelligence configured and/or trained to perform semantic segmentation of two-dimensional images. In such embodiments, the structures identified in a three-dimensional image may be all those identified in the two-dimensional images or slices defining said three-dimensional image, and the location of an identified structure within the three-dimensional image may be defined by the location of portions of that structure shown in one or more of the two-dimensional images or slices.
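As a minimal illustration of assembling per-slice detections into a three-dimensional location (assuming equally spaced slices in acquisition order; the helper name is hypothetical):

```python
import numpy as np

def stack_slice_masks(slice_masks):
    """Assemble 2D label arrays (one per tomographic slice, in acquisition
    order) into a 3D label volume, so per-slice detections locate each
    structure within the three-dimensional image."""
    return np.stack(slice_masks, axis=0)
```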
[0049] Alternatively, or additionally, structures may be identified within the two images based on boundaries between regions within the three-dimensional images.
[0050] After the structures of the subject shown in each of the two images are identified, one or more shared structures that have been identified in both the first and second images are determined. In some embodiments, the one or more shared structures may be all of the structures that have been identified in both of the two images. In alternative embodiments, the one or more shared structures may be a subset of the structures that have been identified in both of the two images.
[0051] In a second step 120 of the method 100, a three-dimensional representation of the one or more shared structures in the first image and a three-dimensional representation of the one or more shared structures in the second image are used to determine a transformation for mapping the shared structures in one of the images onto the identified structures in the other of the images. In embodiments such as the illustrated example, the transformation is for mapping the shared structures in the first image onto the shared structures in the second image.
[0052] In some embodiments, the one or more shared structures may be a plurality of shared structures (or a plurality of shared structures whenever more than one structure has been identified in both of the images), which may improve the precision and/or accuracy with which the mapping is determined.
[0053] The three-dimensional representation of the shared structures in each image may represent all of the determined shared structures in that image, in the locations in which they were identified within that image. The three-dimensional representations may exclude portions of the images other than the shared structures. This may introduce sparsity into the representations, which may facilitate determining the mapping from one image to the other.
[0054] In some embodiments, the three-dimensional representations of the shared structures in the two images may be, or may be derived from, segmented-out copies of those structures identified in the image. The segmented-out copies of the structures comprise the portions of the input three-dimensional image occupied by those structures, substantially excluding elements other than the structures. Such segmented-out copies of the structures are kept spatially congruent to their positions in the original images. The segmented-out copies of the shared structures may be output by a structure-identifying artificial intelligence as described above. Segmenting out the shared structures in this manner may introduce sparsity into the images and the representations of the shared structures therein, as described above.
[0055] In some embodiments, the three-dimensional representation of the one or more shared structures within an image may be derived from individual segmented-out copies of each of the one or more shared structures in that image, for example, as output by the structure-identifying artificial intelligence when each structure is identified. In other embodiments, after the one or more shared structures are determined, a segmented-out copy of all of the one or more shared structures may be derived from each image, and may define or be used to derive the three-dimensional representation of the one or more shared structures.
[0056] The three-dimensional representations of the one or more shared structures in the first and second images may be first and second point clouds respectively.
[0057] If the segmented-out copies of the one or more shared structures are already in a point cloud format, such point cloud representations may be used in this second step. If they are not already in a point cloud format, for example if they are in a three-dimensional array format such as a .npy file as described above, the method may comprise transforming them into a point cloud representation.
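One straightforward array-to-point-cloud conversion, sketched under the assumption that occupied voxels have non-zero values (names are hypothetical):

```python
import numpy as np

def mask_to_point_cloud(mask, spacing=(1.0, 1.0, 1.0)):
    """Convert a segmented-out volume into an (N, 3) point cloud: one point
    per occupied voxel, scaled by the voxel spacing."""
    points = np.argwhere(mask > 0).astype(np.float64)
    return points * np.asarray(spacing)
```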
[0058] Figs 2a and 2b show examples of two overlapping three-dimensional medical images 200, 250 of portions of a patient's liver. Fig. 2a shows the left lobe of the liver and Fig. 2b shows the right lobe of the liver. Both of the two three-dimensional images include the patient's gallbladder.
[0059] Figs 3a and 3b each show front and rear views of an extracted segmented-out copy of a gallbladder 300, 350 that has been identified in each of the two images 200, 250 and extracted therefrom. The segmented-out gallbladders 300, 350 are three-dimensional representations of the gallbladders identified in the two images. The segmented-out gallbladders 300, 350 retain the locations in which they were located in the original images 200, 250, such that a mapping or transformation of one segmented-out gallbladder 300, 350 onto the other will also accurately map the two images 200, 250 onto each other.
[0060] The first and second three-dimensional representations of the one or more shared structures may be down-sampled before being used to determine the transformation, preferably in the same manner. Point cloud representations may be downsampled to between 5,000 and 10,000 points, for example from the multi-million-point representations obtained from the images.
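A simple random downsampling sketch (one of several plausible strategies; the fixed seed and target size are illustrative assumptions):

```python
import numpy as np

def downsample_points(points, target=8000, seed=0):
    """Randomly downsample an (N, 3) point cloud to `target` points, e.g.
    from a multi-million-point cloud to the 5,000-10,000 range."""
    if len(points) <= target:
        return points
    keep = np.random.default_rng(seed).choice(len(points), size=target, replace=False)
    return points[keep]
```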
[0061] The transformation may be determined using an artificial intelligence and/or machine learning model, which may be configured to receive the two three-dimensional representations of the identified structures in the two images, such as point cloud representations, as inputs, and to output the transformation for mapping one three-dimensional representation onto the other. The transformation-determining artificial intelligence and/or machine learning model may be a point cloud registration artificial intelligence, such as a PointNet neural network or variant thereof. The transformation for mapping one point cloud onto the other may be a transformation matrix.
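The present disclosure contemplates a learned, PointNet-style registration model; purely as an illustrative stand-in, classical point-to-point ICP (here via the open3d library, an assumed dependency) also produces such a 4x4 transformation matrix from two point clouds:

```python
import numpy as np
import open3d as o3d  # assumed dependency

def register_point_clouds(source_points, target_points, max_dist=5.0):
    """Estimate a 4x4 homogeneous matrix mapping `source_points` onto
    `target_points`. Plain ICP stands in here for the learned
    registration model described above; it is not that model."""
    src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(source_points))
    tgt = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(target_points))
    result = o3d.pipelines.registration.registration_icp(
        src, tgt, max_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation  # 4x4 transformation matrix
```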
[0062] The transformation-determining artificial intelligence may be a machine learning model trained using a training data set comprising three-dimensional representations of objects in two different locations, such as point cloud representations, which may comprise representations of anatomical features, for example from performing steps 110, 120 of the method as described above, and/or generic non-anatomical structures, such as artificially created three-dimensional representations of abstract three-dimensional shapes. In some embodiments, training the transformation-determining machine learning model may comprise initially training the model using three-dimensional representations, such as point cloud representations, of generic non-anatomical structures and subsequently training the model using three-dimensional representations of anatomical structures. This training approach may quickly train the model to perform transformations before fine-tuning it for use on anatomical structures as in the present invention.
[0063] In a third step 130 of the method 100, the determined transformation is used to transform one of the two images, and the transformed image is combined with the other image to form a composite image. In embodiments such as the illustrated example, the first image is transformed and is then combined with the second image to form a composite image.
[0064] Transforming one of the images using the determined transformation effectively results in that image having the same coordinate system as the other image. The two images may then be combined, for example by overlaying them, to result in a composite image. The combined image provides a contiguous three-dimensional image covering the portions of the subject shown in both of the two images. The combined image may subsequently be saved or output in any suitable format.
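Applying a 4x4 transformation matrix to a voxel volume amounts to resampling; a sketch using scipy (an assumed dependency, with a hypothetical function name) might look like this:

```python
import numpy as np
from scipy import ndimage  # assumed dependency

def transform_volume(volume, T, output_shape):
    """Resample `volume` under the forward 4x4 transform `T`.

    scipy.ndimage.affine_transform maps *output* voxel coordinates back to
    *input* coordinates, so the inverse of the forward transform is passed.
    """
    T_inv = np.linalg.inv(T)
    return ndimage.affine_transform(
        volume, T_inv[:3, :3], offset=T_inv[:3, 3],
        output_shape=output_shape, order=1)  # trilinear interpolation
```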
[0065] The two images may be combined in three-dimensional array formats, such as .npy files. Combining the two images may comprise defining an array and adding the two images to the array. The array may be defined with a size sufficient to contain both three-dimensional images in their entirety. At points where the two combined images overlap, the values of one of the two images may be selected, or the values of the two images may be averaged, in some embodiments with a weighted average.
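A minimal compositing sketch, under the simplifying assumptions that both volumes are already co-registered, share one shape, have non-negative intensities, and use zero to mark "no data":

```python
import numpy as np

def composite(a, b):
    """Combine two co-registered, equal-shape volumes in which zero marks
    'no data'. Overlapping voxels are averaged; elsewhere the volume
    containing data is kept unchanged."""
    overlap = (a > 0) & (b > 0)
    out = a + b
    out[overlap] = 0.5 * (a[overlap] + b[overlap])
    return out
```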
[0066] Fig. 4 shows a composite image produced by combining the three-dimensional images shown in Figs 2a and 2b after the image of the right lobe of the liver shown in Fig. 2b has been transformed using a transformation determined from point cloud representations of the extracted gallbladders shown in Figs. 3a and 3b.
[0067] In some embodiments, the method may comprise further combining the composite image with another image in the same manner as described above. This may be performed by repeating the steps of the method with the composite image being one of the two images for said repeated steps. Such a process may be repeated any number of times to combine a range of images of different parts of the subject.
[0068] Fig. 5 is a flowchart of a specific embodiment of a method 500 as described herein, in which two three-dimensional anatomical images, or "voxels", are combined to produce a larger contiguous anatomy. The method 500 comprises segmenting out and listing 510 the anatomical structures shown in each of the two images, and then comparing 520 the lists of anatomical structures to shortlist those anatomical features present in both images.
After this shortlist is created, a point cloud of the shortlisted anatomical structures is created 530 for each of the two images, and a point cloud registration neural network 540 is used to generate a transformation matrix from the point clouds. The transformation matrix is then applied 550 to the second of the three-dimensional images to align it with the first three-dimensional image, and the aligned images are then combined. If more than two images are provided, the method may be repeated as needed 560 until all of the images have been combined.
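Strung together, the hypothetical helper sketches above would give a pipeline of roughly the following shape; `segment` stands for any callable returning an integer label map (0 = background), which is an assumption rather than a specified interface:

```python
import numpy as np

def combine_images(first, second, segment):
    """End-to-end sketch of the Fig. 5 pipeline, composed from the helper
    sketches above (all names hypothetical)."""
    labels_a, labels_b = segment(first), segment(second)                  # step 510
    shared = (set(np.unique(labels_a)) & set(np.unique(labels_b))) - {0}  # step 520
    pc_a = np.vstack([mask_to_point_cloud(labels_a == s) for s in shared])  # step 530
    pc_b = np.vstack([mask_to_point_cloud(labels_b == s) for s in shared])
    T = register_point_clouds(downsample_points(pc_b), downsample_points(pc_a))  # 540
    aligned = transform_volume(second, T, output_shape=first.shape)       # step 550
    return composite(first, aligned)
```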
[0069] Implementations of the subject matter and the operations described in this specification can be realized in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Implementations of the subject matter described in this specification can be realized using one or more computer programs, i.e., one or more modules of computer program instructions, encoded on computer storage medium for execution by, or to control the operation of, data processing apparatus. Alternatively or in addition, the program instructions can be encoded on an artificially generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal that is generated to encode information for transmission to suitable receiver apparatus for execution by a data processing apparatus. A computer storage medium can be, or be included in, a computer-readable storage device, a computer-readable storage substrate, a random or serial access memory array or device, or a combination of one or more of them. Moreover, while a computer storage medium is not a propagated signal, a computer storage medium can be a source or destination of computer program instructions encoded in an artificially generated propagated signal. The computer storage medium can also be, or be included in, one or more separate physical components or media (e.g., multiple CDs, disks, or other storage devices).
[0070] While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed the novel methods and systems described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of methods and systems described herein may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms of modifications as would fall within the scope and spirit of the inventions.

Claims (19)

  1. A method of combining first and second three-dimensional images of a subject, the method comprising: identifying one or more shared structures of the subject shown in both of the first and second images; determining a transformation mapping a three-dimensional representation of the one or more shared structures in the first image onto a three-dimensional representation of the one or more shared structures in the second image; and applying the determined transformation to the first image and combining the transformed first image and the second image into a composite image.
  2. A method according to claim 1, wherein identifying the one or more shared structures of the subject shown in both the first and second images comprises identifying structures of the subject within each of the first and second images and determining one or more structures that are present in both the first and second images.
  3. A method according to claim 2, wherein identifying structures in the first and second images comprises identifying parts of said images showing any of a preselected group of structures.
  4. A method according to claim 2 or claim 3, wherein the structures within each of the first and second three-dimensional images are identified using a three-dimensional segmentation artificial intelligence.
  5. A method according to claim 4, wherein the three-dimensional segmentation artificial intelligence outputs a segmented-out copy of at least one of the structures identified in the first and second images, the segmented-out copy of a structure comprising the portion of the first or second image occupied by that structure, with the remainder of the first or second image substantially removed.
  6. A method according to any preceding claim, wherein the three-dimensional representation of the one or more shared structures in the first image is a first point cloud and the three-dimensional representation of the one or more shared structures in the second image is a second point cloud, and wherein the transformation is determined using the first and second point clouds.
  7. A method according to claim 6, wherein the first and second point clouds are derived from segmented-out copies of the one or more shared structures in the first and second images respectively.
  8. A method according to claim 6 when dependent upon claim 5, wherein the first and second point clouds are derived from segmented-out copies of the one or more shared structures in the first and second images respectively that are output by the three-dimensional segmentation artificial intelligence.
  9. A method according to any preceding claim, wherein the transformation is determined using an artificial intelligence model configured to receive the three-dimensional representation of the one or more shared structures in the first image and the three-dimensional representation of the one or more shared structures in the second image as inputs.
  10. A method according to any preceding claim, wherein the transformation is a transformation matrix.
  11. A system for combining first and second three-dimensional images of a subject, the system configured to: identify one or more shared structures of the subject shown in both of the first and second images; determine a transformation mapping a three-dimensional representation of the one or more shared structures in the first image onto a three-dimensional representation of the one or more shared structures in the second image; and apply the determined transformation to the first image and combine the transformed first image and the second image into a composite image.
  12. A system according to claim 11, configured to identify the one or more shared structures of the subject shown in both the first and second images by identifying structures of the subject within each of the first and second images and determining one or more structures that are present in both the first and second images.
  13. A system according to claim 11 or claim 12, configured to identify structures within each of the first and second three-dimensional images using a three-dimensional segmentation artificial intelligence.
  14. A system according to claim 13, wherein the three-dimensional segmentation artificial intelligence outputs a segmented-out copy of at least one of the structures identified in the first and second images, the segmented-out copy of a structure comprising the portion of the first or second image occupied by that structure, with the remainder of the first or second image substantially removed.
  15. A system according to any preceding claim, wherein the three-dimensional representation of the one or more shared structures in the first image is a first point cloud and the three-dimensional representation of the one or more shared structures in the second image is a second point cloud, and wherein the transformation is determined using the first and second point clouds.
  16. A system according to claim 15, wherein the first and second point clouds are derived from segmented-out copies of the one or more shared structures in the first and second images respectively.
  17. A system according to claim 15 when dependent upon claim 14, wherein the first and second point clouds are derived from segmented-out copies of the one or more shared structures in the first and second images respectively that are output by the three-dimensional segmentation artificial intelligence.
  18. A system according to any of claims 11 to 17, configured to determine the transformation using an artificial intelligence model configured to receive the three-dimensional representation of the one or more shared structures in the first image and the three-dimensional representation of the one or more shared structures in the second image as inputs.
  19. A storage medium comprising computer instructions executable by one or more processors, the computer instructions when executed by the one or more processors causing the one or more processors to perform a method according to any of claims 1 to 10.
GB2215790.3A 2022-10-25 2022-10-25 Combining three-dimensional images Pending GB2623771A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
GB2215790.3A GB2623771A (en) 2022-10-25 2022-10-25 Combining three-dimensional images
PCT/GB2023/052794 WO2024089423A1 (en) 2022-10-25 2023-10-25 System and method for three-dimensional imaging

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB2215790.3A GB2623771A (en) 2022-10-25 2022-10-25 Combining three-dimensional images

Publications (2)

Publication Number Publication Date
GB202215790D0 GB202215790D0 (en) 2022-12-07
GB2623771A true GB2623771A (en) 2024-05-01

Family

ID=84818601

Family Applications (1)

Application Number Title Priority Date Filing Date
GB2215790.3A Pending GB2623771A (en) 2022-10-25 2022-10-25 Combining three-dimensional images

Country Status (1)

Country Link
GB (1) GB2623771A (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016201006A1 (en) * 2015-06-08 2016-12-15 The Board Of Trustees Of The Leland Stanford Junior University 3d ultrasound imaging, associated methods, devices, and systems
US20200342614A1 (en) * 2019-04-24 2020-10-29 Baidu Online Network Technology (Beijing) Co., Ltd. Method and apparatus for point cloud registration, and computer readable medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016201006A1 (en) * 2015-06-08 2016-12-15 The Board Of Trustees Of The Leland Stanford Junior University 3d ultrasound imaging, associated methods, devices, and systems
US20200342614A1 (en) * 2019-04-24 2020-10-29 Baidu Online Network Technology (Beijing) Co., Ltd. Method and apparatus for point cloud registration, and computer readable medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
FU ET AL. "Point cloud overlapping region co-segmentation network." Published 2020, Proceedings of Machine Learning Research. *
KAMENCAY ET AL. "Improved feature point algorithm for 3D point cloud registration." Published July 2019, IEEE. *
ZHANG ET AL. "Deep learning based point cloud registration: an overview". Published June 2020, Virtual Reality & Intelligent Hardware. *

Also Published As

Publication number Publication date
GB202215790D0 (en) 2022-12-07
