CN111898657A - Image matching method and device - Google Patents

Image matching method and device

Info

Publication number
CN111898657A
CN111898657A (application CN202010677668.5A)
Authority
CN
China
Prior art keywords: target, image, region, determining, images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010677668.5A
Other languages
Chinese (zh)
Inventor
崔彤哲
孙毅
周永新
段明磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hinacom Software And Technology Ltd
Original Assignee
Hinacom Software And Technology Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hinacom Software And Technology Ltd filed Critical Hinacom Software And Technology Ltd
Priority to CN202010677668.5A priority Critical patent/CN111898657A/en
Publication of CN111898657A publication Critical patent/CN111898657A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/187Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30061Lung

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention discloses an image matching method and device. The method comprises: removing non-target regions from a plurality of target images to be matched, thereby determining the target regions to be matched in each target image; and matching the plurality of target images according to their target regions, taking as the matching image the superposition of the target regions at the position where their overlapping area is maximized. The invention solves the technical problem in the related art that an observer must rely entirely on personal experience to identify and match organs and body parts across CT images from different periods, which is inefficient and inaccurate.

Description

Image matching method and device
Technical Field
The invention relates to the field of image processing, in particular to an image matching method and device.
Background
CT (Computed Tomography) images are a common means of determining the location and specific condition of a lesion; a doctor must visually inspect the CT images and, based on experience, directly identify the lesion and infer the disease condition. Owing to factors such as instrument interference and organ motion and growth, lung CT images obtained from the same patient at different times can differ greatly in shape, which increases the difficulty of follow-up for lung diseases. Because the doctor compares different CT images visually, identifying organs and lesions from CT images is both inefficient and inaccurate.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiments of the present invention provide an image matching method and device that at least solve the technical problem in the related art that an observer must identify and match organs and body parts across CT images from different periods entirely on the basis of personal experience, which is inefficient and inaccurate.
According to one aspect of the embodiments of the present invention, an image matching method is provided, including: removing non-target regions from a plurality of target images to be matched, and determining the target regions to be matched in the target images; and matching the plurality of target images according to the target regions, and determining, as the matching image, the superposition of the target regions of the plurality of target images at the position where their overlapping area is maximized.
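The matching step described above — superposing the target regions at the position of maximum overlapping area — can be sketched as an exhaustive search over integer translations of one binary target-region mask against another. This is an illustrative sketch only; the function names and the search strategy are assumptions, not part of the disclosure:

```python
# Slide one binary target-region mask over another and keep the integer
# offset whose overlap (number of coinciding foreground pixels) is largest.

def overlap_at(ref, mov, dy, dx):
    """Count foreground pixels of `mov`, shifted by (dy, dx), that land on
    foreground pixels of `ref`."""
    h, w = len(ref), len(ref[0])
    count = 0
    for y in range(len(mov)):
        for x in range(len(mov[0])):
            ry, rx = y + dy, x + dx
            if mov[y][x] and 0 <= ry < h and 0 <= rx < w and ref[ry][rx]:
                count += 1
    return count

def best_match(ref, mov, search=3):
    """Exhaustively search shifts in [-search, search] along each axis and
    return the (dy, dx) offset with maximal overlap area."""
    best, best_area = (0, 0), -1
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            area = overlap_at(ref, mov, dy, dx)
            if area > best_area:
                best_area, best = area, (dy, dx)
    return best, best_area
```

A practical implementation would work on 3D volumes and a coarse-to-fine search, but the criterion — maximize the overlap of target regions — is the same.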
Optionally, removing the non-target regions from the plurality of target images to be matched and determining the target regions to be matched includes: performing threshold segmentation on the target image to generate a binary image of the target image; performing an erosion operation on the binary image to break the adhesions between its constituent parts; and removing the non-target regions from the binary image by region growing, thereby determining the target region of the target image.
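The region-growing removal mentioned above can be sketched as a flood fill that starts from a seed known to lie in a non-target area and erases everything it reaches. The seed choice (top-left corner) and function name are assumptions for illustration:

```python
from collections import deque

# Grow a region from a seed pixel through 4-connected foreground pixels
# and erase everything reached, removing one non-target component.

def remove_region(img, seed=(0, 0)):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    if not out[seed[0]][seed[1]]:
        return out                      # seed is already background
    q = deque([seed])
    out[seed[0]][seed[1]] = 0
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and out[ny][nx]:
                out[ny][nx] = 0
                q.append((ny, nx))
    return out
```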
Optionally, performing threshold segmentation on the target image to generate its binary image includes: determining an optimal threshold for the target image with a threshold-segmentation algorithm; judging whether the optimal threshold lies within a preset threshold range; if it does, segmenting the target image with the optimal threshold to generate the binary image; and if it does not, designating any value within the preset range as the optimal threshold and generating the binary image with that designated threshold.
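The thresholding strategy just described can be sketched as follows: compute an Otsu-style optimal threshold, then fall back to a value inside a preset range when the Otsu result lies outside it. The range bounds and function names are assumptions for illustration (a lung-CT implementation would use Hounsfield-unit bounds):

```python
# Otsu threshold with a preset-range fallback, as the optional step above
# specifies for poor-quality images where Otsu alone fails.

def otsu_threshold(pixels):
    """Return the integer threshold t maximizing between-class variance,
    where a pixel p is foreground if p > t."""
    lo, hi = min(pixels), max(pixels)
    n = len(pixels)
    best_t, best_var = lo, -1.0
    for t in range(lo, hi):
        fg = [p for p in pixels if p > t]
        bg = [p for p in pixels if p <= t]
        if not fg or not bg:
            continue
        w_fg, w_bg = len(fg) / n, len(bg) / n
        m_fg, m_bg = sum(fg) / len(fg), sum(bg) / len(bg)
        var = w_fg * w_bg * (m_fg - m_bg) ** 2   # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def segment_threshold(pixels, lo=-600, hi=-200):
    """Clamp the Otsu threshold to a preset range [lo, hi]; the default
    bounds are made-up placeholders, not values from the patent."""
    t = otsu_threshold(pixels)
    if not (lo <= t <= hi):
        t = (lo + hi) // 2          # any value in the preset range is allowed
    return t
```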
Optionally, before the erosion operation is performed on the binary image of the target image to break the adhesions between its parts, the method further includes: down-sampling the binary image of the target image; and performing hole filling on the down-sampled binary image.
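The two preprocessing steps above can be sketched in 2D as a stride-2 down-sampling followed by hole filling, where any background region not reachable from the image border is treated as a hole inside the object. Illustrative only; names are assumptions:

```python
from collections import deque

def downsample2(img):
    """Keep every second pixel in each direction (stride-2 down-sampling)."""
    return [row[::2] for row in img[::2]]

def fill_holes(img):
    """Flood-fill the background from the border; any 0-pixel the fill
    cannot reach is an enclosed hole and is set to foreground."""
    h, w = len(img), len(img[0])
    outside = [[False] * w for _ in range(h)]
    q = deque((y, x) for y in range(h) for x in range(w)
              if (y in (0, h - 1) or x in (0, w - 1)) and img[y][x] == 0)
    for y, x in q:
        outside[y][x] = True
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and img[ny][nx] == 0 \
                    and not outside[ny][nx]:
                outside[ny][nx] = True
                q.append((ny, nx))
    return [[1 if img[y][x] or not outside[y][x] else 0 for x in range(w)]
            for y in range(h)]
```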
Optionally, matching the plurality of target images according to the target regions and determining, as the matching image, the superposition of the target regions at the position of maximum overlapping area includes: determining a plurality of components of the target region of each target image; and matching the plurality of target images according to those components, determining the matching image as the superposition of the target regions of the plurality of target images at the position where their overlapping area is maximized.
Optionally, determining the plurality of components of the target region of each target image includes: performing connected-region processing on the target region of the target image to determine a plurality of connected regions of that target region; and determining the component corresponding to each connected region.
Optionally, determining the component corresponding to each connected region includes: determining a bounding box for each of the connected regions; when none of the bounding boxes exceeds a preset size threshold, determining the size ordering of the bounding boxes; and, when that ordering matches the size ordering of the real components, assigning to each connected region the real component whose rank in the ordering matches that of its bounding box.
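The assignment rule above can be sketched by ranking connected regions by bounding-box area and pairing them with the real components listed from largest to smallest. The component names, and the assumption that the right lung yields the larger bounding box, are illustrative and not stated in the patent:

```python
# Label each connected region with the real component whose size ordering
# it matches, e.g. the larger bounding box is taken as the right lung.

def assign_components(bboxes, names_by_size=("right lung", "left lung")):
    """`bboxes` maps region id -> (height, width). Regions are ranked by
    bounding-box area and paired with `names_by_size`, which lists the
    real components from largest to smallest."""
    ranked = sorted(bboxes, key=lambda r: bboxes[r][0] * bboxes[r][1],
                    reverse=True)
    return {region: name for region, name in zip(ranked, names_by_size)}
```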
Optionally, performing connected-region processing on the target region of the target image and determining its connected regions includes: performing eight-connectivity processing on the target region to determine a plurality of connected regions and their bounding boxes; determining whether the size of each bounding box satisfies a preset condition, discarding any connected region whose bounding box does not satisfy it, and retaining any whose bounding box does; and determining whether the size of a retained bounding box exceeds a preset size threshold and, when it exceeds a second preset size threshold, applying an erosion operation to that connected region and repeating the eight-connectivity processing to determine further connected regions.
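The eight-connectivity analysis above can be sketched as a breadth-first labelling that records each region's pixel count and bounding box, so that regions whose boxes fail the size test can be discarded. Illustrative only; names are assumptions:

```python
from collections import deque

# Label 8-connected foreground regions and report each region's size and
# bounding box (ymin, xmin, ymax, xmax).

def label8(img):
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    regions = []
    for sy in range(h):
        for sx in range(w):
            if img[sy][sx] and not seen[sy][sx]:
                q = deque([(sy, sx)])
                seen[sy][sx] = True
                count, ymin, xmin, ymax, xmax = 0, sy, sx, sy, sx
                while q:
                    y, x = q.popleft()
                    count += 1
                    ymin, ymax = min(ymin, y), max(ymax, y)
                    xmin, xmax = min(xmin, x), max(xmax, x)
                    for dy in (-1, 0, 1):        # 8-neighborhood
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < h and 0 <= nx < w
                                    and img[ny][nx] and not seen[ny][nx]):
                                seen[ny][nx] = True
                                q.append((ny, nx))
                regions.append((count, (ymin, xmin, ymax, xmax)))
    return regions
```

Under 8-connectivity, diagonally touching pixels belong to one region, which is why the patent's erosion step is needed to split regions that merely touch at thin adhesions.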
Optionally, before the connected-region processing is performed on the target region of the target image to determine its connected regions, the method further includes: performing opening-operation reconstruction on the target region of the target image.
Optionally, after the connected-region processing is performed on the target region of the target image and its connected regions are determined, the method further includes: dilating the target region of the target image after the connected-region processing; and restoring the shape of the dilated target image from the initial target image as down-sampled and hole-filled.
According to another aspect of the embodiments of the present invention, an image matching device is also provided, including: a determining module, configured to remove the non-target regions from a plurality of target images to be matched and to determine the target regions to be matched in the target images; and a matching module, configured to match the plurality of target images according to the target regions and to determine, as the matching image, the superposition of the target regions of the plurality of target images at the position where their overlapping area is maximized.
According to another aspect of the embodiments of the present invention, there is also provided a storage medium, where the storage medium includes a stored program, and when the program runs, a device in which the storage medium is located is controlled to execute the image matching method according to any one of the above.
According to another aspect of the embodiments of the present invention, there is also provided a processor, configured to run a program, where the program executes the image matching method described in any one of the above.
In the embodiments of the present invention, the non-target regions of a plurality of target images to be matched are removed and the target regions to be matched are determined; the plurality of target images are then matched according to their target regions, and the superposition of the target regions at the position of maximum overlapping area is taken as the matching image. By matching the target regions of different target images, the position of maximum overlap is found and the most accurate matching result for the target images is obtained.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a flow chart of a method of matching images according to an embodiment of the invention;
FIG. 2 is a flow chart of a lung CT image registration method according to an embodiment of the present invention;
FIG. 3-1 is a schematic diagram of the transverse position of the first lung image after thresholding according to an embodiment of the present invention;
FIG. 3-2 is a schematic diagram of the coronal position of the first lung image after thresholding according to an embodiment of the present invention;
FIG. 3-3 is a schematic diagram of the sagittal position of the first lung image after thresholding according to an embodiment of the present invention;
FIG. 3-4 is a schematic diagram of the transverse position of the second lung image after thresholding according to an embodiment of the present invention;
FIG. 3-5 is a schematic diagram of the coronal position of the second lung image after thresholding according to an embodiment of the present invention;
FIG. 3-6 is a schematic diagram of the sagittal position of the second lung image after thresholding according to an embodiment of the present invention;
FIG. 4-1 is a schematic diagram of the transverse position of the first image after down-sampling and hole filling according to an embodiment of the present invention;
FIG. 4-2 is a schematic diagram of the coronal position of the first image after down-sampling and hole filling according to an embodiment of the present invention;
FIG. 4-3 is a schematic diagram of the sagittal position of the first image after down-sampling and hole filling according to an embodiment of the present invention;
FIG. 4-4 is a schematic diagram of the transverse position of the second image after down-sampling and hole filling according to an embodiment of the present invention;
FIG. 4-5 is a schematic diagram of the coronal position of the second image after down-sampling and hole filling according to an embodiment of the present invention;
FIG. 4-6 is a schematic diagram of the sagittal position of the second image after down-sampling and hole filling according to an embodiment of the present invention;
FIG. 5-1 is a schematic diagram of the transverse position of the first image after removal of the non-lung regions according to an embodiment of the present invention;
FIG. 5-2 is a schematic diagram of the coronal position of the first image after removal of the non-lung regions according to an embodiment of the present invention;
FIG. 5-3 is a schematic diagram of the sagittal position of the first image after removal of the non-lung regions according to an embodiment of the present invention;
FIG. 5-4 is a schematic diagram of the transverse position of the second image after removal of the non-lung regions according to an embodiment of the present invention;
FIG. 5-5 is a schematic diagram of the coronal position of the second image after removal of the non-lung regions according to an embodiment of the present invention;
FIG. 5-6 is a schematic diagram of the sagittal position of the second image after removal of the non-lung regions according to an embodiment of the present invention;
FIG. 6-1 is a schematic diagram of the transverse position of the first image after opening-operation reconstruction according to an embodiment of the present invention;
FIG. 6-2 is a schematic diagram of the coronal position of the first image after opening-operation reconstruction according to an embodiment of the present invention;
FIG. 6-3 is a schematic diagram of the sagittal position of the first image after opening-operation reconstruction according to an embodiment of the present invention;
FIG. 6-4 is a schematic diagram of the transverse position of the second image after opening-operation reconstruction according to an embodiment of the present invention;
FIG. 6-5 is a schematic diagram of the coronal position of the second image after opening-operation reconstruction according to an embodiment of the present invention;
FIG. 6-6 is a schematic diagram of the sagittal position of the second image after opening-operation reconstruction according to an embodiment of the present invention;
FIG. 7-1 is a schematic diagram of the transverse position of the first region of the first image after connected-region analysis according to an embodiment of the present invention;
FIG. 7-2 is a schematic diagram of the coronal position of the first region of the first image after connected-region analysis according to an embodiment of the present invention;
FIG. 7-3 is a schematic diagram of the sagittal position of the first region of the first image after connected-region analysis according to an embodiment of the present invention;
FIG. 7-4 is a schematic diagram of the transverse position of the second region of the first image after connected-region analysis according to an embodiment of the present invention;
FIG. 7-5 is a schematic diagram of the coronal position of the second region of the first image after connected-region analysis according to an embodiment of the present invention;
FIG. 7-6 is a schematic diagram of the sagittal position of the second region of the first image after connected-region analysis according to an embodiment of the present invention;
FIG. 7-7 is a schematic diagram of the transverse position of the first region of the second image after connected-region analysis according to an embodiment of the present invention;
FIG. 7-8 is a schematic diagram of the coronal position of the first region of the second image after connected-region analysis according to an embodiment of the present invention;
FIG. 7-9 is a schematic diagram of the sagittal position of the first region of the second image after connected-region analysis according to an embodiment of the present invention;
FIG. 7-10 is a schematic diagram of the transverse position of the second region of the second image after connected-region analysis according to an embodiment of the present invention;
FIG. 7-11 is a schematic diagram of the coronal position of the second region of the second image after connected-region analysis according to an embodiment of the present invention;
FIG. 7-12 is a schematic diagram of the sagittal position of the second region of the second image after connected-region analysis according to an embodiment of the present invention;
FIG. 8-1 is a schematic diagram of the transverse position of the first region of the first image after shape restoration according to an embodiment of the present invention;
FIG. 8-2 is a schematic diagram of the coronal position of the first region of the first image after shape restoration according to an embodiment of the present invention;
FIG. 8-3 is a schematic diagram of the sagittal position of the first region of the first image after shape restoration according to an embodiment of the present invention;
FIG. 8-4 is a schematic diagram of the transverse position of the second region of the first image after shape restoration according to an embodiment of the present invention;
FIG. 8-5 is a schematic diagram of the coronal position of the second region of the first image after shape restoration according to an embodiment of the present invention;
FIG. 8-6 is a schematic diagram of the sagittal position of the second region of the first image after shape restoration according to an embodiment of the present invention;
FIG. 8-7 is a schematic diagram of the transverse position of the first region of the second image after shape restoration according to an embodiment of the present invention;
FIG. 8-8 is a schematic diagram of the coronal position of the first region of the second image after shape restoration according to an embodiment of the present invention;
FIG. 8-9 is a schematic diagram of the sagittal position of the first region of the second image after shape restoration according to an embodiment of the present invention;
FIG. 8-10 is a schematic diagram of the transverse position of the second region of the second image after shape restoration according to an embodiment of the present invention;
FIG. 8-11 is a schematic diagram of the coronal position of the second region of the second image after shape restoration according to an embodiment of the present invention;
FIG. 8-12 is a schematic diagram of the sagittal position of the second region of the second image after shape restoration according to an embodiment of the present invention;
FIG. 9-1 is a schematic diagram of the transverse position of the left lung in the first image according to an embodiment of the present invention;
FIG. 9-2 is a schematic diagram of the coronal position of the left lung in the first image according to an embodiment of the present invention;
FIG. 9-3 is a schematic diagram of the sagittal position of the left lung in the first image according to an embodiment of the present invention;
FIG. 9-4 is a schematic diagram of the transverse position of the right lung in the first image according to an embodiment of the present invention;
FIG. 9-5 is a schematic diagram of the coronal position of the right lung in the first image according to an embodiment of the present invention;
FIG. 9-6 is a schematic diagram of the sagittal position of the right lung in the first image according to an embodiment of the present invention;
FIG. 9-7 is a schematic diagram of the transverse position of the left lung in the second image according to an embodiment of the present invention;
FIG. 9-8 is a schematic diagram of the coronal position of the left lung in the second image according to an embodiment of the present invention;
FIG. 9-9 is a schematic diagram of the sagittal position of the left lung in the second image according to an embodiment of the present invention;
FIG. 9-10 is a schematic diagram of the transverse position of the right lung in the second image according to an embodiment of the present invention;
FIG. 9-11 is a schematic diagram of the coronal position of the right lung in the second image according to an embodiment of the present invention;
FIG. 9-12 is a schematic diagram of the sagittal position of the right lung in the second image according to an embodiment of the present invention;
FIG. 10-1 is a schematic diagram of the registration result for the transverse position of the left lung in the first image according to an embodiment of the present invention;
FIG. 10-2 is a schematic diagram of the registration result for the coronal position of the left lung in the first image according to an embodiment of the present invention;
FIG. 10-3 is a schematic diagram of the registration result for the sagittal position of the left lung in the first image according to an embodiment of the present invention;
FIG. 10-4 is a schematic diagram of the registration result for the transverse position of the right lung in the first image according to an embodiment of the present invention;
FIG. 10-5 is a schematic diagram of the registration result for the coronal position of the right lung in the first image according to an embodiment of the present invention;
FIG. 10-6 is a schematic diagram of the registration result for the sagittal position of the right lung in the first image according to an embodiment of the present invention;
FIG. 10-7 is a schematic diagram of the registration result for the transverse position of the left lung in the second image according to an embodiment of the present invention;
FIG. 10-8 is a schematic diagram of the registration result for the coronal position of the left lung in the second image according to an embodiment of the present invention;
FIG. 10-9 is a schematic diagram of the registration result for the sagittal position of the left lung in the second image according to an embodiment of the present invention;
FIG. 10-10 is a schematic diagram of the registration result for the transverse position of the right lung in the second image according to an embodiment of the present invention;
FIG. 10-11 is a schematic diagram of the registration result for the coronal position of the right lung in the second image according to an embodiment of the present invention;
FIG. 10-12 is a schematic diagram of the registration result for the sagittal position of the right lung in the second image according to an embodiment of the present invention;
fig. 11 is a schematic diagram of an image matching apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In accordance with an embodiment of the present invention, there is provided a method embodiment of a method of matching images, it being noted that the steps illustrated in the flowchart of the drawings may be performed in a computer system such as a set of computer-executable instructions and that, although a logical order is illustrated in the flowchart, in some cases the steps illustrated or described may be performed in an order different than that presented herein.
Fig. 1 is a flowchart of an image matching method according to an embodiment of the present invention, as shown in fig. 1, the method including the steps of:
Step S102: removing the non-target regions from a plurality of target images to be matched, and determining the target regions to be matched in the target images;
Step S104: matching the plurality of target images according to the target regions, and determining, as the matching image, the superposition of the target regions of the plurality of target images at the position where their overlapping area is maximized.
Through the above steps, the non-target regions of a plurality of target images to be matched are removed and the target regions to be matched are determined; the plurality of target images are then matched according to their target regions, and the superposition of the target regions at the position of maximum overlapping area is taken as the matching image. By matching the target regions of different target images, the position of maximum overlap is found and the most accurate matching result for the target images is obtained.
The target image may be a computed tomography (CT) image, and the target region of the CT image may be the region of a target organ, for example a lung region comprising the left lung and the right lung. The non-target region includes the background of the CT image and the regions of other scanned organs. The non-target regions are removed from the CT images algorithmically, the CT images are matched with respect to the target regions, and, once the position of maximum overlap of the target regions is determined, the plurality of target images are superimposed to generate the matching image.
The plurality of target images may be images of the target region from different periods, i.e. CT images of the target organ taken at different times. Matching and superimposing these CT images generates a matching image in which the images of the target organ from multiple periods are overlaid for comparison, so that changes in the target region over time are shown more directly. This makes the images easier for a doctor to read and improves both the efficiency and the accuracy of identification and matching.
Optionally, removing the non-target regions in the multiple target images to be matched and determining the target region that needs to be matched in each target image includes: performing threshold segmentation processing on the target image to generate a binary image of the target image; performing an erosion operation on the binary image of the target image to remove adhesion between the parts of the image; and removing the non-target regions in the binary image by region growing, thereby determining the target region of the target image.
The OTSU threshold segmentation algorithm divides the histogram into two parts with a threshold and takes as the optimal segmentation threshold the value at which the variance between the two parts is maximized. The method segments well when the CT value of the target differs greatly from that of the background, but sometimes the CT image quality is poor and the difference between target and background is not obvious, so OTSU threshold segmentation alone cannot effectively separate them. In this embodiment, segmentation combines the OTSU threshold segmentation algorithm with a specified threshold, which further improves the effect of threshold segmentation on the CT image.
The binary image obtained by threshold segmentation includes air, target organs such as the lung parenchyma, the trachea, the scanning bed in the background, and the like. Some target and non-target regions may be adhered; for example, the lung parenchyma may be adhered to air, which would cause the target region to be removed along with the non-target region. Therefore, an erosion operation is performed on the binary image after threshold segmentation; specifically, the binary image is eroded with a circular template of radius r to remove the adhesion between the target region and the non-target region.
The target region and the non-target region in the de-adhered binary image are essentially separated, so the non-target region can be removed by a region growing algorithm, keeping only the target region. Specifically, the four edges of the first slice in each of the three views of the CT image are used as seed points for region growing; a CT scan of the same target object generally produces images in three views, namely the transverse, coronal, and sagittal positions.
Optionally, performing threshold segmentation processing on the target image and generating a binary image of the target image includes: performing threshold segmentation on the target image by a threshold segmentation algorithm to determine an optimal threshold for the target image; judging whether the optimal threshold is within a preset threshold range; when the optimal threshold is within the preset threshold range, performing threshold segmentation on the target image according to the optimal threshold to generate a binary image; and when the optimal threshold is not within the preset threshold range, specifying a value within the preset threshold range as the threshold and generating the binary image according to the specified threshold.
Specifically, the OTSU threshold segmentation algorithm is combined with a specified threshold: OTSU is used to perform threshold segmentation on the three-dimensional CT image to obtain an optimal threshold T; whether T lies within the preset lung-parenchyma CT value range (-1024 to -400) is judged; and if T is not within the preset lung-parenchyma CT value range, the preset values Tmin = -1024 and Tmax = -400 are specified for threshold segmentation of the CT image, finally yielding a binary image in which the background is separated from the target.
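As a hedged, non-authoritative sketch of this combined step, the following Python code implements a plain-NumPy OTSU search with a fallback to the preset lung-parenchyma range; the function names, the bin count, and the choice to binarize with `volume <= t` are illustrative assumptions rather than the patent's implementation:

```python
import numpy as np

T_MIN, T_MAX = -1024, -400  # preset lung-parenchyma CT value range (from the text)

def otsu_threshold(volume, bins=256):
    """Plain-NumPy OTSU: pick the threshold maximizing between-class variance."""
    hist, edges = np.histogram(volume, bins=bins)
    centers = (edges[:-1] + edges[1:]) / 2.0
    total = hist.sum()
    sum_all = (hist * centers).sum()
    best_t, best_var = centers[0], -1.0
    w0, sum0 = 0.0, 0.0
    for i in range(bins - 1):
        w0 += hist[i]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += hist[i] * centers[i]
        m0, m1 = sum0 / w0, (sum_all - sum0) / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, centers[i]
    return best_t

def segment_ct(volume):
    """Binary mask via OTSU, falling back to the preset range when the
    OTSU threshold falls outside plausible lung CT values."""
    t = otsu_threshold(volume)
    if T_MIN <= t <= T_MAX:
        return volume <= t  # dark voxels (air + lung) as foreground
    return (volume >= T_MIN) & (volume <= T_MAX)
```

The fallback branch corresponds to the Tmin/Tmax case above: when the image quality is too poor for OTSU, the mask simply keeps voxels inside the preset CT range.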
Optionally, before performing an erosion operation on the binary image of the target image to remove adhesion of each part of the image in the binary image, the method further includes: performing down-sampling processing on the binary image of the target image; and carrying out hole filling processing on the binary image after the down-sampling processing.
In this embodiment, only a rough segmentation of the target region is needed, so a down-sampling operation is performed on the binarized image after threshold segmentation; this does not affect the registration accuracy and reduces the program running time.
Since the binary image undergoes an erosion operation in the next step, and excessive erosion could make the target region unrecoverable, hole filling is performed on the binary image before the erosion operation to improve the resistance of the target region to erosion.
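This preparation (down-sampling for speed, then hole filling before erosion) might be sketched as follows, assuming `scipy` is available; the down-sampling factor of 0.5 is an illustrative assumption, and `order=0` keeps the mask binary under nearest-neighbour resampling:

```python
import numpy as np
from scipy import ndimage

def prepare_for_erosion(binary: np.ndarray, factor: float = 0.5) -> np.ndarray:
    """Downsample a binary mask, then fill interior holes so the later
    erosion step cannot hollow out the target region."""
    small = ndimage.zoom(binary.astype(np.uint8), factor, order=0) > 0
    return ndimage.binary_fill_holes(small)
```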
Optionally, matching the plurality of target images according to the target regions and determining the matching image obtained by superimposing the target regions of the plurality of target images where their overlapping area is maximized includes: determining a plurality of components of the target region of each target image; and matching the plurality of target images according to the plurality of components, determining the matching image obtained by superimposing the target regions of the plurality of target images at the position where the overlapping area of the target regions is maximized.
The target region may include a plurality of components, for example, components of a lung region including a left lung and a right lung. The target image is matched according to the plurality of components, that is, the components of the target image are matched with the components of other target images, for example, the left lung of the target image is matched with the left lungs of other target images, and the right lung of the target image is matched with the right lungs of other target images. And under the condition that the overlapping areas of the target areas of different target images are the largest, overlapping the target areas of the plurality of target images to determine a matching image. Therefore, images of the target visceral organs in multiple periods are overlapped together for comparison, the change conditions of the target region in different periods are reflected more vividly, doctors can read conveniently, the recognition and matching efficiency is improved, and the accuracy is improved.
Matching the plurality of target images according to the plurality of components, and determining the position where the overlapping area of the target regions of the plurality of target images is maximized, can be done by matching the components of the target regions one by one. Where the target region is the lung region and the components of the lung region comprise the left lung and the right lung, the left lung is matched first by determining the position where the overlapping area of the left lung of the target image and the left lungs of the other target images is maximized; after the left-lung registration is finished, the right lung is matched by determining the position where the overlapping area of the right lung of the target image and the right lungs of the other target images is maximized.
In the registration of the components, scaling, rotation, translation, and other operations can be performed in sequence. Scaling brings the areas of corresponding components in the target regions of different target images to the same order of magnitude so that the components are comparable; rotation and translation then find the position where the overlapping area of the corresponding components of the target regions of the different target images is maximized.
For example, in the left-lung matching process, the bounding box of the left lung of the first image and the bounding box of the left lung of the second image are used to calculate a scaling factor, giving a transform matrix parameter as the initial parameter. On the basis of the initial parameter, rotation about the image center is performed within a preset radian range, the overlapping area of the left lung of the first image and the left lung of the second image is calculated, and the transform matrix parameter at which the overlapping area is maximal is selected as the intermediate parameter. On the basis of the intermediate parameter, translation is performed within a preset translation range, the overlapping area of the left lung of the first image and the left lung of the second image is calculated, and the transform matrix parameter at which the overlapping area is maximal is selected as the final parameter. The registration of the right lung proceeds in the same way. The first image and the second image can be images of the same patient at different periods, so four transform matrix parameters are obtained in total and the results are stored. The present embodiment then generates the final matching image using the final transform matrix parameters.
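A minimal brute-force sketch of this coarse rotate-then-translate search over binary masks, under assumed angle and shift ranges (the patent does not specify them), could look like:

```python
import numpy as np
from scipy import ndimage

def overlap(fixed, moving):
    """Overlapping area of two binary masks, in pixels."""
    return int(np.logical_and(fixed, moving).sum())

def register_component(fixed: np.ndarray, moving: np.ndarray):
    """Return ((angle_deg, (dy, dx)), area) maximizing the overlap area."""
    best = (0.0, (0, 0))
    best_area = overlap(fixed, moving)
    # Rotation pass about the image centre (intermediate parameter).
    for angle in np.arange(-10, 10.5, 2.5):
        rot = ndimage.rotate(moving.astype(np.uint8), angle,
                             reshape=False, order=0) > 0
        a = overlap(fixed, rot)
        if a > best_area:
            best_area, best = a, (angle, (0, 0))
    # Translation pass on top of the best rotation (final parameter).
    rot = ndimage.rotate(moving.astype(np.uint8), best[0],
                         reshape=False, order=0) > 0
    for dy in range(-5, 6):
        for dx in range(-5, 6):
            shifted = ndimage.shift(rot.astype(np.uint8), (dy, dx), order=0) > 0
            a = overlap(fixed, shifted)
            if a > best_area:
                best_area, best = a, (best[0], (dy, dx))
    return best, best_area
```

In practice the search starts from the bounding-box scale estimate described above; here the scaling step is omitted and only the fine rotation/translation adjustment is shown.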
Optionally, determining a plurality of components of the target region of each target image includes: performing region-connectivity processing on the target region of the target image and determining a plurality of connected regions of the target region of the target image; and respectively determining the components corresponding to the plurality of connected regions.
Region-connectivity processing is performed on the target region of the target image to obtain a plurality of connected regions; the size of each connected region is calculated through its bounding box and compared with the preset size of a component, and a connected region is determined to be a component when the conditions are met.
Optionally, respectively determining the components corresponding to the plurality of connected regions includes: determining bounding boxes for the plurality of connected regions; determining the size relationship among the bounding boxes of the plurality of connected regions when none of them exceeds a preset size threshold; and when the size relationship of the bounding boxes is the same as the size relationship of the real components, assigning to each connected region the real component corresponding to its bounding box.
That the bounding boxes of the plurality of connected regions do not exceed the preset size threshold means that no connected region is larger than the largest component; the preset size threshold can be the size of the largest component, which prevents analysis errors caused by a connected region containing several different components.
When the size relationship of the bounding boxes is the same as the size relationship of the real components, each connected region is assigned the real component corresponding to its bounding box; for example, the target region is the lung region and the components are the left lung and the right lung.
The largest and second-largest of the connected regions are selected and recorded as the first region and the second region; the bounding box of the first region is calculated, and if its width satisfies the size condition, the first region and the second region are the left and right lung parenchyma images respectively, where the size relationship between the left lung and the right lung is known in advance.
Optionally, performing region-connectivity processing on the target region of the target image and determining a plurality of connected regions of the target region includes: performing eight-connectivity processing on the target region of the target image, and determining a plurality of connected regions and their bounding boxes; determining whether the size of the bounding box of each connected region meets a preset condition, discarding the connected region when it does not, and retaining it when it does; and determining whether the size of the bounding box of a retained connected region exceeds a preset size threshold, and when it does, performing an erosion operation on the connected region followed by eight-connectivity processing to determine a plurality of connected regions.
The preset condition may be that the width of the bounding box exceeds a preset first width value, or the width-to-height ratio of the bounding box exceeds a preset width-to-height ratio, or the minimum value of the height of the bounding box exceeds half of the image height; in these cases the connected-domain values on the binary image are set to 0, that is, the connected region is discarded, similarly to the removed non-target regions.
When a connected region is retained, whether the size of its bounding box exceeds the preset size threshold is determined; if it does, the connected region contains several different components, so the erosion operation and the eight-connectivity processing are performed on it again and a plurality of connected regions are determined anew.
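The connectivity step could be sketched as below, assuming 2-D masks and `scipy.ndimage`; the `max_width` threshold and the single-pass re-erosion are illustrative simplifications of the bounding-box conditions described above:

```python
import numpy as np
from scipy import ndimage

EIGHT = np.ones((3, 3), int)  # 8-connectivity structuring element (2-D)

def connected_regions(binary, max_width=None):
    """Label with 8-connectivity; regions whose bounding box is wider than
    max_width are eroded and relabelled (components still stuck together)."""
    labels, n = ndimage.label(binary, structure=EIGHT)
    regions = []
    for i, sl in enumerate(ndimage.find_objects(labels), start=1):
        mask = labels[sl] == i
        h, w = mask.shape
        if max_width is not None and w > max_width:
            # Too wide: erode to break the adhesion, then relabel.
            eroded = ndimage.binary_erosion(mask)
            sub, m = ndimage.label(eroded, structure=EIGHT)
            for j in range(1, m + 1):
                regions.append((sl, sub == j))
        else:
            regions.append((sl, mask))
    return regions
```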
Optionally, before performing region-connectivity processing on the target region of the target image and determining a plurality of connected regions of the target region, the method further includes: performing opening-by-reconstruction processing on the target region of the target image.
The opening-by-reconstruction operation can correctly restore the shape of the objects remaining after erosion, fill holes, and remove all targets connected to the image boundary; performing it at this step effectively improves the segmentation of the lung parenchyma.
Optionally, after performing region-connectivity processing on the target region of the target image and determining the plurality of connected regions, the method further includes: performing dilation processing on the target region of the target image after the region-connectivity processing; and restoring the shape of the dilated target image according to the initial target image after down-sampling and hole filling.
Since an erosion operation is performed during the connected-domain analysis, which affects the size and shape of the lung parenchymal region, the image is dilated; a logical AND is then taken between the dilated image and the binary image that was down-sampled and hole-filled after threshold segmentation, completing the shape restoration.
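A one-function sketch of this dilate-then-AND restoration, with an assumed dilation iteration count:

```python
import numpy as np
from scipy import ndimage

def restore_shape(component_mask, reference_binary, iterations=2):
    """Dilate the eroded component mask, then AND it with the
    (downsampled, hole-filled) reference so the result regains the
    original boundary without growing past it."""
    dilated = ndimage.binary_dilation(component_mask, iterations=iterations)
    return np.logical_and(dilated, reference_binary)
```

The AND with the reference image is what stops the dilation from overshooting: the restored mask can never extend beyond the thresholded binary image.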
It should be noted that this embodiment also provides an alternative implementation, which is described in detail below.
The morbidity and mortality of lung cancer in China rank first among malignant tumors and pose a great threat to residents' health. Clinical findings show that the cure rate of early lung cancer exceeds 90%, and early detection of abnormal conditions of the lung can control the disease and reduce mortality. Early detection and diagnosis of lung nodules (3-30 mm in diameter) are very important for early diagnosis and treatment of lung cancer; effectively distinguishing and diagnosing lung nodules, quickly determining whether they are benign or malignant, removing malignant nodules as early as possible, and avoiding unnecessary over-treatment are the keys to the diagnosis and treatment of lung nodules. Therefore, follow-up comparison of a patient's pulmonary nodules, observing the growth of the nodules and changes in their benign or malignant character, is important for timely treatment.
In the actual diagnosis and treatment process, factors such as medical instrument interference and lung respiration and growth increase the difficulty for the doctor of observing changes in lung nodules from lung CT images. Therefore, a fast and accurate image registration algorithm can help a doctor quickly and accurately locate the position of a pulmonary nodule across examinations of the same patient at different periods, improving the doctor's detection efficiency and accuracy.
Because of the influence of factors such as medical instrument interference, lung respiration, and growth, the lung CT images obtained from the same patient at different times can differ considerably in form, which increases the difficulty of follow-up visits for lung diseases. A lung CT image registration method based on the minimum bounding box is therefore provided: a first image (that is, the fixed image, which keeps a fixed position during registration) and a second image (that is, the moving image, which is moved during registration to determine the final registration position) are segmented to obtain the left and right lung parenchyma binary images and their bounding boxes; the bounding boxes of the two images are then used to calculate the transform matrix parameters of the initial registration of the left and right lung parenchyma respectively; and the initially registered images are finely adjusted (rotated and translated) within a small range to obtain the transform matrix parameters at which the registration region is largest, completing the lung registration. The transform matrix is essentially a matrix by which the image is transformed (scaled, rotated, and translated).
Fig. 2 is a flowchart of a lung CT image registration method according to an embodiment of the present invention, and as shown in fig. 2, the present embodiment provides a lung CT image registration method based on a minimum bounding box, which is as follows:
1. threshold segmentation
The basic principle of the OTSU threshold segmentation algorithm is to divide the histogram into two parts with a threshold and to take as the optimal segmentation threshold the value at which the variance between the two parts is maximized. The method segments well when the CT value of the target differs greatly from that of the background, but sometimes the CT quality is poor and the difference is not obvious, so OTSU threshold segmentation alone cannot effectively separate the target and the background. Therefore, this method combines OTSU with a specified threshold for segmentation. The OTSU algorithm is an efficient image binarization algorithm proposed by the Japanese scholar Otsu. The specific steps are as follows:
(1) performing threshold segmentation on the three-dimensional CT image by using the OTSU to obtain an optimal threshold T;
(2) judging whether T is in the range of the lung parenchyma CT value (-1024 to-400);
(3) if so, the image is directly processed in the next step; if not, Tmin = -1024 and Tmax = -400 are specified for threshold segmentation, finally yielding a binary image in which the background is separated from the target, as shown in fig. 3-1 to fig. 3-6.
FIG. 3-1 is a schematic diagram of the transverse position of the first lung image after threshold segmentation according to an embodiment of the present invention; FIG. 3-2 is the coronal position of the first lung image after threshold segmentation; FIG. 3-3 is the sagittal position of the first lung image after threshold segmentation; FIG. 3-4 is the transverse position of the second lung image after threshold segmentation; FIG. 3-5 is the coronal position of the second lung image after threshold segmentation; and FIG. 3-6 is the sagittal position of the second lung image after threshold segmentation.
2. Down sampling
Because this method calculates the transform matrix through the bounding boxes for registration, only a rough segmentation of the lung is needed, so a down-sampling operation is performed on the binary image after threshold segmentation; this does not affect the registration accuracy and reduces the program running time.
3. Hole filling
Since the image is eroded next, and excessive erosion would make it unrecoverable, hole filling is performed on the image before the erosion operation.
The images after down-sampling and hole filling are obtained as shown in fig. 4-1 to 4-6, wherein fig. 4-1 is a schematic diagram of the transverse position of the first image after down-sampling and hole filling according to an embodiment of the present invention; fig. 4-2 is the coronal position of the first image; fig. 4-3 is the sagittal position of the first image; fig. 4-4 is the transverse position of the second image; fig. 4-5 is the coronal position of the second image; and fig. 4-6 is the sagittal position of the second image, each after down-sampling and hole filling.
4. Removal of non-pulmonary regions
After steps 1-3, a binary image is obtained whose foreground contains air, lung parenchyma, the trachea, and the scanning bed in the background; sometimes the lung parenchyma is adhered to the air, which would cause the lung parenchyma to be removed during region growing. Therefore, the image is eroded to remove the adhesion, and the non-lung regions are then removed by region growing. The specific steps are as follows:
(1) eroding the binary image with a circular template of radius r;
(2) taking the four edges of the first slice of the three-dimensional image as seed points for region growing.
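These two steps might be sketched in 2-D as follows; treating the outside of the image as foreground (`border_value=1`) so that border-touching air erodes naturally, and using connected labels touching the four edges as a stand-in for edge-seeded region growing, are both illustrative assumptions:

```python
import numpy as np
from scipy import ndimage

def disk(r):
    """Circular (disk) structuring element of radius r."""
    y, x = np.ogrid[-r:r + 1, -r:r + 1]
    return x * x + y * y <= r * r

def remove_border_connected(binary, r=2):
    """Erode with a disk of radius r to break adhesions, then drop every
    connected region that touches one of the four image edges
    (air / bed / background), keeping only the lung-like regions."""
    eroded = ndimage.binary_erosion(binary, structure=disk(r), border_value=1)
    labels, _ = ndimage.label(eroded)
    border = np.unique(np.concatenate([labels[0], labels[-1],
                                       labels[:, 0], labels[:, -1]]))
    keep = ~np.isin(labels, border[border != 0])
    return eroded & keep
```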
An image with the non-lung regions removed is obtained, as shown in fig. 5-1 to 5-6, wherein fig. 5-1 is a schematic diagram of the transverse position of the first image after removal of the non-lung regions according to an embodiment of the present invention; fig. 5-2 is the coronal position of the first image; fig. 5-3 is the sagittal position of the first image; fig. 5-4 is the transverse position of the second image; fig. 5-5 is the coronal position of the second image; and fig. 5-6 is the sagittal position of the second image, each after removal of the non-lung regions.
5. Open operation reconstruction
The open operation reconstruction can correctly restore the shape of the object remained after corrosion, fill holes and remove all targets connected with the image boundary, so the open operation reconstruction is carried out at the step, and the segmentation effect of the lung parenchyma can be effectively improved.
An image after opening-by-reconstruction is obtained, as shown in fig. 6-1 to 6-6, wherein fig. 6-1 is a schematic diagram of the transverse position of the first image after opening-by-reconstruction according to an embodiment of the present invention; fig. 6-2 is the coronal position of the first image; fig. 6-3 is the sagittal position of the first image; fig. 6-4 is the transverse position of the second image; fig. 6-5 is the coronal position of the second image; and fig. 6-6 is the sagittal position of the second image, each after opening-by-reconstruction.
6. Connected component analysis
After the above operations, the large trachea sometimes still remains in the lung region, which affects the image registration result; since this method registers the left and right lung parenchyma separately, the trachea must be removed and the component regions (Component) of the left and right lungs obtained. The specific steps are as follows:
(1) performing 8-connectivity marking on the three-dimensional image;
(2) calculating bounding boxes of all connected domains;
(3) setting the connected-domain value to 0 if the width of the bounding box > a preset first width value (e.g., cWholeLungWidth_upperLimit), or the width-to-height ratio of the bounding box > a preset width-to-height ratio (e.g., cWidthHeightratio), or the minimum value of the height of the bounding box > half of the image height; otherwise, keeping the connected domain;
(4) performing connected domain analysis on the image obtained in the step (3) again;
(5) selecting the largest and second-largest connected regions, recorded as the first region (Component1) and the second region (Component2);
(6) calculating the bounding box of the first region; if its width is larger than a preset second width value (for example, cWholeLungWidth_lowerLimit), indicating that the left and right lung parenchyma have not been separated because of the trachea, performing (7); otherwise, the first region and the second region obtained in (5) are the left and right lung parenchyma images respectively;
(7) performing an erosion operation on the first region, then performing 8-connectivity marking, and selecting the largest and second-largest connected regions as the first region and the second region.
An image after connected-region analysis is obtained, as shown in fig. 7-1 to 7-12, wherein fig. 7-1 is a schematic diagram of the transverse position of the first region of the first image after connected-region analysis according to an embodiment of the present invention; fig. 7-2 is the coronal position of the first region of the first image; fig. 7-3 is the sagittal position of the first region of the first image; fig. 7-4 is the transverse position of the second region of the first image; fig. 7-5 is the coronal position of the second region of the first image; fig. 7-6 is the sagittal position of the second region of the first image; fig. 7-7 is the transverse position of the first region of the second image; fig. 7-8 is the coronal position of the first region of the second image; fig. 7-9 is the sagittal position of the first region of the second image; fig. 7-10 is the transverse position of the second region of the second image; fig. 7-11 is the coronal position of the second region of the second image; and fig. 7-12 is the sagittal position of the second region of the second image, each after connected-region analysis.
7. Shape restoration
Since the size and shape of the lung parenchymal region are affected by the erosion operation during the connected-domain analysis, the image is dilated, and a logical AND is then taken between the dilated image and the down-sampled, hole-filled image from steps 2 and 3 to complete the shape restoration.
A shape-restored image is obtained, as shown in fig. 8-1 to 8-12, wherein fig. 8-1 is a schematic diagram of the transverse position of the first region of the first image after shape restoration according to an embodiment of the present invention; fig. 8-2 is the coronal position of the first region of the first image; fig. 8-3 is the sagittal position of the first region of the first image; fig. 8-4 is the transverse position of the second region of the first image; fig. 8-5 is the coronal position of the second region of the first image; fig. 8-6 is the sagittal position of the second region of the first image; fig. 8-7 is the transverse position of the first region of the second image; fig. 8-8 is the coronal position of the first region of the second image; fig. 8-9 is the sagittal position of the first region of the second image; fig. 8-10 is the transverse position of the second region of the second image; fig. 8-11 is the coronal position of the second region of the second image; and fig. 8-12 is the sagittal position of the second region of the second image, each after shape restoration.
8. Determining the left and right lungs
The method registers the left and right lung parenchyma separately, so it is necessary to know which of the first region and the second region represents the left lung and which represents the right lung before registration can be completed. The specific steps are as follows:
(1) calculating bounding boxes of the first region and the second region;
(2) if the bounding box widths of the first region and the second region are larger than a preset third width value (for example, m_whorlengwidth_lowerLimit), the lung parenchyma segmentation is deemed to have failed; otherwise, proceed to (3);
(3) if the maximum width-direction coordinate of the bounding box of the first region is less than that of the bounding box of the second region, the first region is the right lung and the second region is the left lung; otherwise, the first region is the left lung and the second region is the right lung. Since the left and right lungs are determined here by comparing the maximum width-direction coordinates of the two bounding boxes, the condition may also be preset: for example, the left-right relationship of the human body is determined first, and the left and right lungs are then assigned from the bounding-box maxima according to that determined relationship.
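Reading "width maximum" as the largest coordinate of the bounding box along the width (x) axis, which is an interpretation of the embodiment rather than its literal wording, step (3) might be sketched as:

```python
import numpy as np
from scipy import ndimage

def assign_left_right(region_a, region_b):
    """Assign two parenchyma masks to ('right', 'left') by comparing the
    largest x-coordinate of each bounding box.  Assumes the usual
    radiological orientation in which x grows toward the patient's left,
    so the mask ending at the smaller x is the right lung."""
    max_x = lambda m: int(np.where(m)[1].max())   # largest column index
    if max_x(region_a) < max_x(region_b):
        return {"right": region_a, "left": region_b}
    return {"right": region_b, "left": region_a}

# Toy axial slice with one blob on each side of the midline.
slice_ = np.zeros((10, 10), dtype=bool)
slice_[3:7, 1:4] = True     # patient's right lung (small x)
slice_[3:7, 6:9] = True     # patient's left lung (large x)
labels, _ = ndimage.label(slice_)
lungs = assign_left_right(labels == 1, labels == 2)
```

The bounding-box width sanity check from step (2) would run before this assignment; it is omitted here for brevity.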
Obtaining images of the left and right lungs as shown in figs. 9-1 through 9-12, wherein fig. 9-1 is a schematic diagram of the left lung transverse position of the first image of an embodiment of the present invention; FIG. 9-2 is a schematic diagram of the left lung coronal position of the first image of an embodiment of the present invention; FIG. 9-3 is a schematic diagram of the left lung sagittal position of the first image of an embodiment of the present invention; FIG. 9-4 is a schematic diagram of the right lung transverse position of the first image of an embodiment of the present invention; FIG. 9-5 is a schematic diagram of the right lung coronal position of the first image of an embodiment of the present invention; FIG. 9-6 is a schematic diagram of the right lung sagittal position of the first image of an embodiment of the present invention; FIG. 9-7 is a schematic diagram of the left lung transverse position of the second image of an embodiment of the present invention; FIG. 9-8 is a schematic diagram of the left lung coronal position of the second image of an embodiment of the present invention; FIG. 9-9 is a schematic diagram of the left lung sagittal position of the second image of an embodiment of the present invention; FIG. 9-10 is a schematic diagram of the right lung transverse position of the second image of an embodiment of the present invention; FIG. 9-11 is a schematic diagram of the right lung coronal position of the second image of an embodiment of the present invention; fig. 9-12 is a schematic diagram of the right lung sagittal position of the second image of an embodiment of the present invention.
9. Pulmonary parenchymal registration
Through the operations of steps 1-8, the left and right lung parenchyma binary images and bounding boxes of the first image, and the left and right lung parenchyma binary images and bounding boxes of the second image, are obtained respectively. The registration algorithm is described below taking the left lung as an example, as follows:
(1) calculating the scaling from the bounding box of the left lung of the first image and the bounding box of the left lung of the second image to obtain initial Transform Matrix parameters (initial Parameters);
(2) on the basis of the initial Parameters, rotating around the image center within a preset radian range, calculating the overlapping area of the left lung of the first image and the left lung of the second image, and selecting the Transform Matrix parameters at which the overlap is largest as intermediate parameters (second Parameters);
(3) on the basis of the intermediate parameters, performing a translation operation within a preset translation range, calculating the overlapping area of the left lung of the first image and the left lung of the second image, and selecting the Transform Matrix parameters at which the overlap is largest as final parameters (final Parameters);
The registration operation for the right lung is the same as above. Images of the same patient from different periods may serve as the first image and the second image respectively, so four sets of Transform Matrix parameters are obtained in total and the results are stored. The present embodiment thus completes registration using the final Transform Matrix parameters.
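The three-stage search described above (a scale from the bounding boxes, then a rotation sweep, then a translation sweep, each keeping the parameters that maximize mask overlap) can be sketched in 2-D roughly as follows; the angle and shift ranges and helper names are illustrative assumptions, not values from the embodiment:

```python
import numpy as np
from scipy import ndimage

def overlap(a, b):
    """Number of pixels where both binary masks are set."""
    return int(np.logical_and(a, b).sum())

def bbox_extents(mask):
    """Side lengths of the bounding box of a binary mask."""
    sl = ndimage.find_objects(mask.astype(int))[0]
    return np.array([s.stop - s.start for s in sl], dtype=float)

def coarse_register(fixed, moving, angles_deg=(-10, -5, 0, 5, 10),
                    shifts=range(-5, 6)):
    # Stage 1: scale from bounding-box extents (initial parameters).
    zoom = bbox_extents(fixed) / bbox_extents(moving)
    m = ndimage.zoom(moving.astype(np.uint8), zoom, order=0) > 0
    canvas = np.zeros_like(fixed)                 # pad/crop to fixed shape
    h, w = (min(a, b) for a, b in zip(fixed.shape, m.shape))
    canvas[:h, :w] = m[:h, :w]
    # Stage 2: rotation sweep about the image centre (second parameters).
    best_a = max(angles_deg, key=lambda a: overlap(
        fixed, ndimage.rotate(canvas, a, reshape=False, order=0)))
    rotated = ndimage.rotate(canvas, best_a, reshape=False, order=0)
    # Stage 3: translation sweep (final parameters).
    best_s = max(((dy, dx) for dy in shifts for dx in shifts),
                 key=lambda s: overlap(fixed, np.roll(rotated, s, axis=(0, 1))))
    return best_a, best_s, np.roll(rotated, best_s, axis=(0, 1))

# Toy check: a square shifted by (3, 2) is recovered by the translation stage.
fixed = np.zeros((16, 16), dtype=np.uint8)
fixed[4:8, 4:8] = 1
moving = np.roll(fixed, (3, 2), axis=(0, 1))
angle, shift, registered = coarse_register(fixed, moving, angles_deg=(0,))
```

Each stage refines the parameters of the previous one rather than searching the full transform space at once, which keeps this exhaustive-sweep approach cheap.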
Obtaining the final registration result, as shown in figs. 10-1 to 10-12, where different grayscales represent the difference regions relative to the overlapping region of the two images: the region with lighter grayscale is the difference between the first image and the overlapping region, and the region with deeper grayscale is the difference between the second image and the overlapping region. Fig. 10-1 is a schematic diagram of the registration result of the left lung transverse position of the first image according to an embodiment of the present invention; FIG. 10-2 is a schematic diagram of the registration result of the left lung coronal position of the first image according to an embodiment of the present invention; FIG. 10-3 is a schematic diagram of the registration result of the left lung sagittal position of the first image according to an embodiment of the present invention; FIG. 10-4 is a schematic diagram of the registration result of the right lung transverse position of the first image according to an embodiment of the present invention; FIG. 10-5 is a schematic diagram of the registration result of the right lung coronal position of the first image according to an embodiment of the present invention; FIG. 10-6 is a schematic diagram of the registration result of the right lung sagittal position of the first image according to an embodiment of the present invention; FIG. 10-7 is a schematic diagram of the registration result of the left lung transverse position of the second image according to an embodiment of the present invention; FIG. 10-8 is a schematic diagram of the registration result of the left lung coronal position of the second image according to an embodiment of the present invention; FIG. 10-9 is a schematic diagram of the registration result of the left lung sagittal position of the second image according to an embodiment of the present invention; FIG. 10-10 is a schematic diagram of the registration result of the right lung transverse position of the second image according to an embodiment of the present invention; FIG. 10-11 is a schematic diagram of the registration result of the right lung coronal position of the second image according to an embodiment of the present invention; fig. 10-12 is a schematic diagram of the registration result of the right lung sagittal position of the second image according to an embodiment of the present invention.
Fig. 11 is a schematic diagram of an image matching apparatus according to an embodiment of the present invention, and as shown in fig. 11, according to another aspect of the embodiment of the present invention, there is also provided an image matching apparatus including: a determination module 112 and a matching module 114, which are described in detail below.
A determining module 112, configured to remove non-target regions in multiple target images to be matched, and determine a target region that needs to be matched in the target image; and a matching module 114, connected to the determining module 112, for matching the plurality of target images according to the target regions, and determining a matching image obtained by superimposing the target regions of the plurality of target images when the overlapping areas of the target regions of the plurality of target images are the largest.
With this device, the determining module 112 removes non-target regions in a plurality of target images to be matched and determines the target regions that need to be matched in the target images; the matching module 114 matches the plurality of target images according to the target regions and, when the overlapping area of the target regions of the plurality of target images is largest, determines the matching image obtained by superimposing those target regions. The position at which the overlapping area is largest is the best matching result for the plurality of target images. By superimposing the target regions, the differences between the target regions of the plurality of target images can be determined effectively and compared and observed conveniently, achieving the technical effect of improving recognition and matching efficiency and accuracy, and thereby solving the technical problems in the related art that an observer must autonomously recognize and match organs and parts in CT images from different periods, which depends entirely on personal experience and has low efficiency and low accuracy.
According to another aspect of the embodiments of the present invention, there is also provided a storage medium including a stored program, wherein when the program runs, a device in which the storage medium is located is controlled to execute the image matching method of any one of the above.
According to another aspect of the embodiments of the present invention, there is also provided a processor, configured to run a program, wherein, when running, the program executes the image matching method of any one of the above.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, various modifications and improvements can be made without departing from the principle of the present invention, and these modifications and improvements should also be regarded as falling within the protection scope of the present invention.

Claims (13)

1. A method for matching images, comprising:
removing non-target areas in a plurality of target images to be matched, and determining target areas needing to be matched in the target images;
and matching the plurality of target images according to the target regions, and determining a matching image obtained by superimposing the target regions of the plurality of target images under the condition that the overlapping areas of the target regions of the plurality of target images are largest.
2. The method of claim 1, wherein removing non-target regions in a plurality of target images to be matched, and determining the target region that needs to be matched in the target image, comprises:
performing threshold segmentation processing on the target image to generate a binary image of the target image;
performing an erosion operation on the binary image of the target image to remove adhesion between the parts of the binary image;
and removing non-target areas in the binary image in an area generation mode, and determining the target area of the target image.
3. The method of claim 2, wherein thresholding the target image to generate a binary image of the target image comprises:
performing threshold segmentation on the target image through a threshold segmentation algorithm to determine an optimal threshold of the target image;
judging whether the optimal threshold is within a preset threshold range;
under the condition that the optimal threshold is within the preset threshold range, performing threshold segmentation on the target image according to the optimal threshold to generate the binary image;
and under the condition that the optimal threshold is not within the preset threshold range, designating a value within the preset threshold range as the optimal threshold, and generating the binary image according to the designated optimal threshold.
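As a non-limiting illustration of this threshold-selection logic, the sketch below uses Otsu's method as a stand-in for the threshold segmentation algorithm and falls back to the midpoint of the preset range; the Hounsfield-unit window and all names are assumed examples, not part of the claim:

```python
import numpy as np

def otsu_threshold(img):
    """Plain-NumPy Otsu: choose the cut that maximises between-class variance."""
    hist, edges = np.histogram(img, bins=256)
    mids = (edges[:-1] + edges[1:]) / 2.0
    total = hist.sum()
    sum_all = float((hist * mids).sum())
    best_t, best_var = mids[0], -1.0
    w0 = c0 = 0.0
    for h, m in zip(hist, mids):
        w0 += h                      # weight of the lower class
        c0 += h * m                  # weighted sum of the lower class
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        mu0, mu1 = c0 / w0, (sum_all - c0) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, m
    return float(best_t)

def binarize(img, lo=-600.0, hi=-200.0):
    """Keep the computed optimum only if it lies within the preset [lo, hi]
    range; otherwise fall back to a value inside the range (midpoint here)."""
    t = otsu_threshold(img)
    if not (lo <= t <= hi):
        t = (lo + hi) / 2.0
    return img < t, t

# Two clusters so far apart that Otsu's cut lands outside the preset range,
# triggering the fallback branch of the claim.
img = np.concatenate([np.full(50, -800.0), np.full(50, 50.0)])
mask, t = binarize(img)
```

The fallback guards against degenerate histograms (for example, a scan with no tissue in the field of view) driving the threshold to an implausible value.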
4. The method according to claim 2, wherein before performing an erosion operation on the binary image of the target image to remove adhesion between the parts of the binary image, the method further comprises:
performing down-sampling processing on the binary image of the target image;
and carrying out hole filling processing on the binary image after the down-sampling processing.
5. The method according to claim 1, wherein matching the plurality of target images according to the target regions, and determining a matching image obtained by superimposing the target regions of the plurality of target images under the condition that the overlapping areas of the target regions of the plurality of target images are largest, comprises:
determining a plurality of components of the target region for each of the target images;
and matching the plurality of target images according to the plurality of components, and determining a matching image obtained by superimposing the target regions of the plurality of target images under the condition that the overlapping areas of the target regions of the plurality of target images are largest.
6. The method of claim 5, wherein determining a plurality of components of the target region for each of the target images comprises:
performing region connectivity processing on the target region of the target image, and determining a plurality of connected regions of the target region of the target image;
and respectively determining the components corresponding to the plurality of connected regions.
7. The method of claim 6, wherein determining the corresponding components of the plurality of connected regions respectively comprises:
determining bounding boxes of the plurality of connected regions;
determining the size relationship among the bounding boxes of the plurality of connected regions under the condition that the sizes of the bounding boxes of the plurality of connected regions do not exceed a preset size threshold;
and under the condition that the size relationship among the bounding boxes is the same as the size relationship among the real components, determining that the real component corresponding to each bounding box corresponds to the connected region corresponding to that bounding box.
8. The method of claim 6, wherein performing region connectivity processing on the target region of the target image, and determining a plurality of connected regions of the target region of the target image, comprises:
carrying out eight-connectivity processing on the target region of the target image, and determining a plurality of connected regions and bounding boxes of the connected regions;
determining whether the size of the bounding box of each connected region meets a preset condition, discarding the connected region under the condition that the size of the bounding box does not meet the preset condition, and retaining the connected region under the condition that the size of the bounding box meets the preset condition;
and determining whether the size of the bounding box of the connected region exceeds a second preset size threshold, and under the condition that the size of the bounding box exceeds the second preset size threshold, performing an erosion operation on the connected region and performing the eight-connectivity processing again to determine a plurality of connected regions.
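A rough 2-D sketch of this eight-connectivity pass (label, filter by bounding-box size, and erode-then-relabel oversized regions); the size limits and helper names are illustrative assumptions, not values from the claim:

```python
import numpy as np
from scipy import ndimage

EIGHT = np.ones((3, 3), dtype=bool)   # 8-connectivity structuring element

def connected_regions(mask, min_side=3, max_side=50):
    """Label with 8-connectivity; discard regions whose bounding box is too
    small, keep those within limits, and erode-then-relabel regions whose
    box exceeds max_side (e.g. two structures still stuck together)."""
    labels, _ = ndimage.label(mask, structure=EIGHT)
    kept = []
    for i, sl in enumerate(ndimage.find_objects(labels), start=1):
        sides = [s.stop - s.start for s in sl]
        region = labels == i
        if min(sides) < min_side:
            continue                               # too small: discard
        if max(sides) > max_side:                  # too large: erode, retry
            kept.extend(connected_regions(
                ndimage.binary_erosion(region), min_side, max_side))
        else:
            kept.append(region)
    return kept

# Toy mask: a 2x2 speck (discarded) and a 6x6 block (kept).
mask = np.zeros((20, 20), dtype=bool)
mask[2:4, 2:4] = True
mask[8:14, 8:14] = True
regions = connected_regions(mask)
```

The recursive branch mirrors the claim's "erode and repeat the eight-connectivity processing" step for adhered regions.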
9. The method of claim 8, wherein performing region connectivity processing on the target region of the target image further comprises, before determining a plurality of connected regions of the target region of the target image:
and performing opening-by-reconstruction processing on the target region of the target image.
10. The method according to claim 8, wherein performing a region connectivity process on the target region of the target image, and after determining a plurality of connected regions of the target region of the target image, further comprises:
performing dilation processing on the target region of the target image after the region connectivity processing;
and performing shape restoration on the dilated target image according to the initial target image after the down-sampling processing and hole filling.
11. An apparatus for matching images, comprising:
the device comprises a determining module, a matching module and a matching module, wherein the determining module is used for removing non-target areas in a plurality of target images to be matched and determining target areas needing to be matched in the target images;
and the matching module is used for matching the target images according to the target areas and determining a matching image obtained by superposing the target areas of the target images under the condition that the superposition areas of the target images are the largest.
12. A storage medium, characterized in that the storage medium comprises a stored program, wherein when the program runs, a device in which the storage medium is located is controlled to execute the image matching method according to any one of claims 1 to 10.
13. A processor, characterized in that the processor is configured to run a program, wherein the program is configured to execute the image matching method according to any one of claims 1 to 10 when running.
CN202010677668.5A 2020-07-14 2020-07-14 Image matching method and device Pending CN111898657A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010677668.5A CN111898657A (en) 2020-07-14 2020-07-14 Image matching method and device

Publications (1)

Publication Number Publication Date
CN111898657A true CN111898657A (en) 2020-11-06

Family

ID=73191240

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010677668.5A Pending CN111898657A (en) 2020-07-14 2020-07-14 Image matching method and device

Country Status (1)

Country Link
CN (1) CN111898657A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101727666A (en) * 2008-11-03 2010-06-09 深圳迈瑞生物医疗电子股份有限公司 Image segmentation method and device, and method for judging image inversion and distinguishing front side and back side of sternum
CN108053417A (en) * 2018-01-30 2018-05-18 浙江大学 A kind of lung segmenting device of the 3DU-Net networks based on mixing coarse segmentation feature
CN108257120A (en) * 2018-01-09 2018-07-06 东北大学 A kind of extraction method of the three-dimensional liver bounding box based on CT images
CN108537784A (en) * 2018-03-30 2018-09-14 四川元匠科技有限公司 A kind of CT figure pulmonary nodule detection methods based on deep learning
CN109961446A (en) * 2019-03-27 2019-07-02 深圳视见医疗科技有限公司 CT/MR three-dimensional image segmentation processing method, device, equipment and medium
CN111127482A (en) * 2019-12-20 2020-05-08 广州柏视医疗科技有限公司 CT image lung trachea segmentation method and system based on deep learning
CN111369623A (en) * 2020-02-27 2020-07-03 复旦大学 Lung CT image identification method based on deep learning 3D target detection

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112884792A (en) * 2021-02-02 2021-06-01 青岛海信医疗设备股份有限公司 Lung image segmentation method and device, electronic equipment and storage medium
CN112884792B (en) * 2021-02-02 2022-10-25 青岛海信医疗设备股份有限公司 Lung image segmentation method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
Yang et al. Segmentation of liver and vessels from CT images and classification of liver segments for preoperative liver surgical planning in living donor liver transplantation
Soler et al. Fully automatic anatomical, pathological, and functional segmentation from CT scans for hepatic surgery
Rusko et al. Fully automatic liver segmentation for contrast-enhanced CT images
Soler et al. An automatic virtual patient reconstruction from CT-scans for hepatic surgical planning
CN108305255B (en) Generation device of liver surgery cutting surface
Zheng et al. Multi-part modeling and segmentation of left atrium in C-arm CT for image-guided ablation of atrial fibrillation
Campadelli et al. A segmentation framework for abdominal organs from CT scans
CN109300113B (en) Pulmonary nodule auxiliary detection system and method based on improved convex hull method
CN113112609A (en) Navigation method and system for lung biopsy bronchoscope
US20210142485A1 (en) Image analysis system for identifying lung features
CN103325143A (en) Mark point automatic registration method based on model matching
CN107067398A (en) Complementing method and device for lacking blood vessel in 3 D medical model
CN113409456B (en) Modeling method, system, device and medium for three-dimensional model before craniocerebral puncture operation
Leventić et al. Left atrial appendage segmentation from 3D CCTA images for occluder placement procedure
Chang et al. 3-D snake for US in margin evaluation for malignant breast tumor excision using mammotome
Soler et al. Fully automatic anatomical, pathological, and functional segmentation from CT scans for hepatic surgery
Jimenez-Carretero et al. Optimal multiresolution 3D level-set method for liver segmentation incorporating local curvature constraints
CN111161241A (en) Liver image identification method, electronic equipment and storage medium
Skalski et al. Kidney segmentation in ct data using hybrid level-set method with ellipsoidal shape constraints
CN111145226B (en) Three-dimensional lung feature extraction method based on CT image
Gao et al. Accurate lung segmentation for X-ray CT images
CN111311626A (en) Skull fracture automatic detection method based on CT image and electronic medium
CN111898657A (en) Image matching method and device
Li et al. Integrating FCM and level sets for liver tumor segmentation
John et al. Automatic left atrium segmentation by cutting the blood pool at narrowings

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination