CN114549320A - Automatic and accurate splicing method, device, equipment and medium for magnetic resonance full spine image
- Publication number: CN114549320A (application CN202210178574.2A)
- Authority: CN (China)
- Prior art keywords: image, spliced, registration, images, matrix
- Legal status: Pending
Classifications
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
- G06F17/16—Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
- G06T7/66—Analysis of geometric attributes of image moments or centre of gravity
- G06T2200/32—Indexing scheme for image data processing or generation, in general, involving image mosaicing
- G06T2207/20221—Image fusion; Image merging
Abstract
The application provides a method, a device, equipment and a medium for automatically and accurately splicing magnetic resonance full spine images. At least two images to be spliced and the corresponding scanning information are acquired, each image to be spliced is preprocessed, and one or more image pairs to be spliced are determined according to their overlapping areas; image characteristic points of each image pair to be spliced are extracted and coarsely registered to obtain a corresponding registration matrix; the image characteristic points are sampled and finely registered to optimize the registration matrix; and the images to be spliced are spliced according to the optimized registration matrix, and the spliced images are fused. The registration precision is high, the average deviation of the inner points after fine registration is obviously reduced, and, through optimization of the registration matrix, a good registration effect can also be obtained for slice layers far away from the magnetic field center.
Description
Technical Field
The application relates to the technical field of image splicing processing, in particular to a method, a device, equipment and a medium for automatically and accurately splicing a magnetic resonance full spine image.
Background
Due to the limitation of the FOV (field of view), the whole spine cannot be imaged at one time in magnetic resonance; multiple acquisitions are needed, and the whole spine image is obtained by stitching the acquired partial spine images.
The existing full spine image splicing process has the following problems:
1) The same pixel resolution is often selected for all the images to be spliced. For relatively small tissues such as the head, this forces large pixels and a large FOV, which leads to problems such as long scanning times and reduced image contrast.
2) Feature information in magnetic resonance images is not distinctive, the feature information of different intervertebral discs is similar, and the matching precision is therefore low.
3) Images of parts that deviate from the magnetic field center are prone to distortion, which increases the splicing error of the edge slice images.
4) After the magnetic resonance images are spliced, the seams of the spliced images are prone to distortion, which increases the difficulty of fusing the spliced images.
Disclosure of Invention
In view of the above drawbacks of the prior art, an object of the present application is to provide a method, an apparatus, a device and a medium for automatically and precisely stitching a magnetic resonance full spine image, so as to solve the problems existing in the prior art during magnetic resonance full spine stitching.
To achieve the above and other related objects, the present application provides a method for automatic and precise stitching of a magnetic resonance full spine image, the method comprising: acquiring at least two images to be spliced and corresponding scanning information, preprocessing each image to be spliced, and determining one or more image pairs to be spliced according to an overlapping area; extracting image characteristic points of each image pair to be spliced and carrying out coarse registration to obtain a corresponding registration matrix; sampling the image characteristic points and carrying out fine registration to optimize the registration matrix; and splicing the images to be spliced according to the optimized registration matrix, and fusing the spliced images.
In an embodiment of the present application, the scan information includes: any one or more of image resolution, pixel resolution, image orientation, patient position, image layer thickness, number of image layers, and patient number.
In an embodiment of the present application, the acquiring at least two images to be stitched and corresponding scanning information, and preprocessing each image to be stitched includes: determining the splicing range and the splicing sequence of the images to be spliced; unifying the pixel resolution and the image resolution of each image to be spliced according to the scanning information; calculating a rotation matrix according to the scanning information, unifying the images to be spliced to a coordinate system through the rotation matrix, and acquiring slice images of the same body part at different scanning positions on the same slice through an interpolation method; and calculating three-dimensional coordinates of all the slice images on a sickbed coordinate system in which the slice images are positioned during scanning of the human body according to the unified coordinate system, and calculating a three-dimensional position overlapping region of the images to be spliced in the sickbed coordinate system according to the coordinate position of each part so as to determine one or more image pairs to be spliced.
In an embodiment of the application, the image feature points of each image pair to be spliced are extracted by traversing the image with a feature template; the feature template may be given different shapes according to the image characteristics.
In an embodiment of the present application, the sampling the image feature points and performing a fine registration to optimize the registration matrix includes: taking the image feature point of an image to be spliced in the image pair to be spliced as a two-dimensional image point set to be registered, and taking the image feature point of the other image to be spliced as a target image point set; respectively sampling the two-dimensional image point set to be registered and the target image point set; respectively calculating the inner points and the outer points and the position distances between the points, and iteratively optimizing the registration matrix according to the quantity proportion of the inner points and the outer points and the average distance between the points and the points; and further optimizing the registration matrix by combining the registration matrix of each layer and the position information of the whole FOV where the layer is located on the basis.
In an embodiment of the present application, the method includes: transforming each feature point of the two-dimensional image point set to be registered by the registration matrix obtained in the current iteration, and judging, through a search, whether it has a nearest neighbor point in the target image point set; if a nearest neighbor exists, the point is considered an inner point; if not, it is considered an outer point.
In an embodiment of the present application, the fusion process includes: calculating the final inner point set and the center of the inner point according to the inner point and the outer point calculated during the fine registration; setting a weight according to the deviation of the distance from each inner point to the center of the image to be registered; and adopting a template frame, multiplying the template frame by the weight to obtain a matrix, and sliding from the joint of the two images to be registered to finish the gradual change correction of the boundary.
To achieve the above and other related objects, the present application provides an automatic and precise stitching device for magnetic resonance full spine images, the device comprising: the system comprises a preprocessing module, a display module and a display module, wherein the preprocessing module is used for acquiring at least two images to be spliced and corresponding scanning information, preprocessing each image to be spliced and determining one or more image pairs to be spliced according to an overlapping area; the rough registration module is used for extracting image characteristic points of each image pair to be spliced and carrying out rough registration to obtain a corresponding registration matrix; the fine registration module is used for sampling the image characteristic points and performing fine registration to optimize the registration matrix; and the fusion module is used for splicing the images to be spliced according to the optimized registration matrix and fusing the spliced images.
To achieve the above and other related objects, the present application provides a computer apparatus, comprising: a memory, and a processor and communicator; the memory is to store computer instructions; the processor executes computer instructions to implement the method as described above.
To achieve the above and other related objects, the present application provides a computer readable storage medium storing computer instructions which, when executed, perform the method as described above.
In summary, according to the method, the device, the equipment and the medium for automatically and accurately splicing the magnetic resonance full spine image, each image to be spliced is preprocessed by acquiring at least two images to be spliced and corresponding scanning information, and one or more image pairs to be spliced are determined according to an overlapping region; extracting image characteristic points of each image pair to be spliced and carrying out coarse registration to obtain a corresponding registration matrix; sampling the image characteristic points and carrying out fine registration to optimize the registration matrix; and splicing the images to be spliced according to the optimized registration matrix, and fusing the spliced images.
Has the following beneficial effects:
the method and the device have the advantages that the registration precision is high, the average deviation value of the inner points after fine registration is obviously reduced, and a good registration effect can be obtained for the slice layer far away from the center of the magnetic field through the optimization of the registration matrix.
Drawings
Fig. 1 is a schematic flow chart illustrating an automatic and accurate stitching method for a magnetic resonance full spine image according to an embodiment of the present application.
FIG. 2 is a schematic view of a spine image according to an embodiment of the present disclosure.
Fig. 3 is a schematic structural diagram of a feature template according to an embodiment of the present application.
Fig. 4A-4B are simulation diagrams illustrating rough registration and fine registration between feature points of an image pair to be stitched according to an embodiment of the present disclosure.
Fig. 5A-5B are schematic views respectively showing the spine images of the same sequence of scans to be stitched according to an embodiment of the present application.
Fig. 6A-6B are schematic diagrams respectively illustrating overlapping regions of stitched images after stitching according to an embodiment of the present application.
Fig. 7A-7B are schematic diagrams respectively illustrating feature point extraction of images to be stitched according to an embodiment of the present application.
FIG. 8 is a schematic view of full spine splicing and fusion according to an embodiment of the present application.
Fig. 9 is a block diagram of an apparatus for automatically and precisely stitching mri full-spine images according to an embodiment of the present disclosure.
Fig. 10 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application is provided by way of specific examples, and other advantages and effects of the present application will be readily apparent to those skilled in the art from the disclosure herein. The present application is capable of other and different embodiments and its several details are capable of modifications and/or changes in various respects, all without departing from the spirit of the present application. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict.
It should be noted that the drawings provided in the following embodiments are only schematic and illustrate the basic idea of the present application, and although the drawings only show the components related to the present application and are not drawn according to the number, shape and size of the components in actual implementation, the type, quantity and proportion of the components in actual implementation may be changed at will, and the layout of the components may be more complex.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes" and/or "including," when used in this specification, specify the presence of stated features, steps, operations, elements, components, items, species, and/or groups, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, species, and/or groups thereof. The terms "or" and "and/or" as used herein are to be construed as inclusive, meaning any one or any combination. Thus, "A, B or C" or "A, B and/or C" means "any of the following: A; B; C; A and B; A and C; B and C; A, B and C". An exception to this definition occurs only when a combination of elements, functions, steps or operations is inherently mutually exclusive in some way.
Fig. 1 is a schematic flow chart of an automatic precise stitching method for an mr full spine image according to an embodiment of the present invention. As shown, the method comprises:
step S101: the method comprises the steps of obtaining at least two images to be spliced and corresponding scanning information, preprocessing each image to be spliced, and determining one or more image pairs to be spliced according to an overlapping area.
In the application, at least two images to be spliced can be obtained by scanning the whole spine with magnetic resonance; for example, 2-3 images are needed for an adult. The images to be stitched carry important accompanying information, and the scanning information includes, but is not limited to, any one or more of image resolution, pixel resolution (pixel spacing), image orientation, patient position, image layer thickness, number of image layers, and patient number. This information provides the corresponding processing information for subsequent steps such as post-processing; for example, the image resolution and pixel resolution are important for unifying the image pixels before splicing.
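For illustration only, the scanning information listed above corresponds closely to standard DICOM header fields. The following Python sketch assumes the images are stored as DICOM files readable with the pydicom library and uses a hypothetical file name; it is one possible way of collecting such information, not the implementation prescribed by this application.

```python
# Minimal sketch (assumption: each image to be spliced is a DICOM file).
import pydicom

def read_scan_info(path):
    """Collect the scanning information later used for preprocessing and splicing."""
    ds = pydicom.dcmread(path)
    return {
        "rows": int(ds.Rows),                                    # image resolution
        "cols": int(ds.Columns),
        "pixel_spacing": [float(v) for v in ds.PixelSpacing],    # pixel resolution (mm)
        "orientation": [float(v) for v in ds.ImageOrientationPatient],
        "position": [float(v) for v in ds.ImagePositionPatient],
        "patient_position": str(ds.get("PatientPosition", "")),
        "slice_thickness": float(ds.get("SliceThickness", 0.0)),
        "patient_id": str(ds.get("PatientID", "")),
    }

# info = read_scan_info("spine_station1_slice01.dcm")  # hypothetical file name
```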
In an embodiment of the present application, the preprocessing the images to be stitched includes:
A. and screening the splicing range and the splicing sequence of the images according to the patient numbers in the scanning information. For example, whether the images belong to the mosaicable range or not can be preliminarily screened according to information such as patient numbers, and the mosaicing sequence can be confirmed.
B. And unifying the pixel resolution and the image resolution of the images to be spliced according to the scanning information.
In short, the images to be stitched often adopt the same pixel resolution, and for a relatively small tissue such as a head, the problems of larger pixels and reduced contrast occur by adopting the same resolution.
To this end, the present application contemplates using a flexible FOV and pixel resolution, for example selecting a small-FOV scan for a smaller tissue site, thereby increasing scan efficiency while improving image contrast. In this step it is therefore necessary to unify the pixel resolution and the image resolution based on the image information and the corresponding scan information, for example by unifying the small-FOV area and the large-FOV area through interpolation (a combined sketch of steps B-D is given after step D below).
C. And acquiring a rotation matrix of the image to be spliced according to the scanning information, unifying the image to be spliced to a unified coordinate system through the rotation matrix, and acquiring a slice image of which the overlapped area is on the same slice through an interpolation method.
Specifically, a rotation matrix corresponding to the spliced images can be calculated according to information such as image directions and patient positions in the image information, and the images to be spliced are unified to a unified coordinate system. The rotation matrix refers to a transformation matrix of an image coordinate system of any three-dimensional image to be spliced relative to a unified coordinate system, wherein the unified coordinate system refers to a patient bed coordinate system during scanning of a human body.
Since the doctor may, for operational convenience during scanning, change the slice coordinates between scans of different parts, in this step the image information of the same body part on the same slice across the different scanned parts is obtained by means such as interpolation on the basis of the unified coordinate system.
D. And calculating three-dimensional coordinates of all the slice images on a sickbed coordinate system in which the slice images are positioned during scanning of the human body according to the unified coordinate system, and calculating a three-dimensional position overlapping region of the images to be spliced in the sickbed coordinate system according to the coordinate position of each part so as to determine one or more image pairs to be spliced.
In brief, using the unified coordinate system, the three-dimensional coordinates of all the slice images in the patient-bed coordinate system used during scanning of the human body are calculated, so that the three-dimensional overlapping region of the images can be computed from the coordinate position of each part (head 1, head 2 ... head n, spine 1, spine 2 ... spine n) in the patient-bed coordinate system; the mosaicable range is thereby determined, and the mosaicable image pairs are selected from the group of images to be spliced. Preferably, a "mosaicable image pair" here refers to an image pair, selected from the image pairs lying on the same slice after interpolation, that has a certain proportion of overlapping region. It should be noted that the slice-level unification in the present application is mainly aimed at the stitching of 2D images.
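For illustration, steps B-D above can be sketched together as follows. The sketch assumes the slices are available as NumPy arrays, that orientation and position are given as DICOM-style direction cosines and a patient-bed origin, and that overlap is judged along the table (head-foot) axis; the 1.0 mm target spacing and the 10% minimum overlap ratio are illustrative assumptions, not values specified by the application.

```python
# Illustrative sketch of preprocessing steps B-D under the assumptions stated above.
import numpy as np
from scipy.ndimage import zoom

def unify_pixel_spacing(image, spacing, target_spacing=(1.0, 1.0)):
    """Step B: resample a 2D slice to a common in-plane pixel spacing by interpolation."""
    factors = (spacing[0] / target_spacing[0], spacing[1] / target_spacing[1])
    return zoom(image, factors, order=1)            # bilinear interpolation

def rotation_matrix(orientation6):
    """Step C: image-to-bed rotation matrix from row/column direction cosines."""
    row = np.asarray(orientation6[:3], dtype=float)
    col = np.asarray(orientation6[3:], dtype=float)
    return np.column_stack([row, col, np.cross(row, col)])  # slice normal completes the basis

def pixel_to_bed(rotation, position, spacing, i, j):
    """Step D: 3D bed coordinates of pixel (row i, column j) of one slice."""
    return (np.asarray(position, dtype=float)
            + rotation[:, 0] * j * spacing[1]       # along the image row direction
            + rotation[:, 1] * i * spacing[0])      # along the image column direction

def bed_axis_range(rotation, position, spacing, shape, axis=2):
    """Head-foot extent of one slice, taken from its four corner pixels."""
    h, w = shape
    corners = [pixel_to_bed(rotation, position, spacing, i, j)[axis]
               for i in (0, h - 1) for j in (0, w - 1)]
    return min(corners), max(corners)

def stitchable_pairs(ranges, min_ratio=0.10):
    """Step D: keep image pairs whose extents overlap by >= min_ratio of the shorter one."""
    names, pairs = sorted(ranges), []
    for a in range(len(names)):
        for b in range(a + 1, len(names)):
            ra, rb = ranges[names[a]], ranges[names[b]]
            lo, hi = max(ra[0], rb[0]), min(ra[1], rb[1])
            if hi > lo and (hi - lo) / min(ra[1] - ra[0], rb[1] - rb[0]) >= min_ratio:
                pairs.append((names[a], names[b], (lo, hi)))
    return pairs

# Hypothetical usage: two spine stations overlapping by 50 mm along the bed axis.
print(stitchable_pairs({"spine1": (0.0, 350.0), "spine2": (300.0, 650.0)}))
```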
Step S102: and extracting image characteristic points of each image pair to be spliced and carrying out coarse registration to obtain a corresponding registration matrix.
In some examples, image feature information is obtained, and feature information of each pair of images can be extracted by using a feature calculation method, for example, a feature extraction algorithm such as Sift, Surf, Orb, etc. is used to extract feature point information.
It should be noted that, there are areas with similar local information and similar feature information for spine images, such as lumbar disc areas, which have similar shapes, and there is a problem that mismatching of single feature points occurs easily for such areas, such as the areas of rectangle 1 and rectangle 2 in fig. 2, which have similar image features.
This problem can be alleviated by enlarging the feature calculation template, but a larger template increases the calculation time and may miss boundary information. To address this, in the present application, in addition to the conventional square templates such as 3 × 3 and 5 × 5, special templates such as cross-shaped and straight (linear) templates are selected in a targeted manner during feature calculation, so that more effective feature information is obtained while the amount of calculation is reduced or kept unchanged. The feature templates are shown in Fig. 3, where (a) is the rectangular template, (b) the cross-shaped template, (c) the linear template and (d) the elliptical template. For example, with template parameters of W = L1 = 125 pixels, h = L2 = 30 pixels, W1 = W2 = 5 pixels and α = 90 degrees, the number of pixels contained in each of the four action domains is approximately: 3111 for the rectangular template, 1111 for the cross-shaped template, 61 for the linear template, and 2347 for the elliptical template.
It can be seen from the figure that, when the differences in feature information appear only along certain directions, adopting linear, cross-shaped or other such feature calculation templates not only captures most of the feature information but also greatly reduces the amount of calculation.
For example, when new feature information is obtained by traversing the image with the feature template to calculate feature points, a special template such as a cross-shaped or linear one can be selected according to the characteristics of the image. The feature points can be extracted by one or more algorithms: point features such as Harris and SUSAN may be considered, line features such as the LoG operator may be adopted, and planar features may also be used; in this example, methods with higher scale invariance are mainly considered.
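For illustration, the rectangular, cross-shaped and linear templates can be represented as boolean masks that restrict where the feature measure is accumulated; the sketch below uses arbitrarily chosen sizes (not the values quoted above) and simply reports how many pixels each mask visits per feature evaluation.

```python
import numpy as np

def rect_template(h, w):
    """Conventional rectangular mask: every pixel in the window is visited."""
    return np.ones((h, w), dtype=bool)

def cross_template(h, w, arm):
    """Cross-shaped mask: one horizontal and one vertical bar of width `arm`."""
    m = np.zeros((h, w), dtype=bool)
    m[h // 2 - arm // 2 : h // 2 + arm // 2 + 1, :] = True
    m[:, w // 2 - arm // 2 : w // 2 + arm // 2 + 1] = True
    return m

def line_template(w, arm=1):
    """Linear (straight-bar) mask along a single direction."""
    return np.ones((arm, w), dtype=bool)

for name, mask in [("rectangular", rect_template(30, 125)),
                   ("cross-shaped", cross_template(30, 125, 5)),
                   ("linear", line_template(125, 1))]:
    print(name, int(mask.sum()), "pixels visited per feature evaluation")
```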
In addition to the above processing, the present application employs a method of global feature point registration. Firstly, an initial coarse registration matrix is obtained by adopting a traditional registration method or according to mechanical coordinate information aiming at different types of initial images.
The rough matching described herein may be obtained according to the coordinate information of the image, or may be obtained through algorithm calculation, such as feature matching calculation. In the preprocessing process, according to different original image information, one or more operations such as interpolation, pixel resolution unification, coordinate system unification and the like are carried out on the image, so that the rough matching method in the embodiment is selected according to the original image and the preprocessing condition thereof.
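As one possible realization of the feature-matching variant of the coarse registration (not prescribed by the text above), the following sketch uses OpenCV: ORB descriptors are matched by brute force and a rotation-translation-scale transform is estimated with RANSAC. The number of features, the number of retained matches and the reprojection threshold are illustrative assumptions.

```python
import cv2
import numpy as np

def coarse_registration(img_moving, img_target):
    """Estimate an initial registration matrix between an image pair (uint8 slices)."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img_moving, None)
    kp2, des2 = orb.detectAndCompute(img_target, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]
    src = np.float32([kp1[m.queryIdx].pt for m in matches])
    dst = np.float32([kp2[m.trainIdx].pt for m in matches])
    # Partial affine model (rotation + translation + uniform scale) with RANSAC rejection.
    matrix, inlier_mask = cv2.estimateAffinePartial2D(
        src, dst, method=cv2.RANSAC, ransacReprojThreshold=3.0)
    keep = inlier_mask.ravel() == 1
    return matrix, src[keep], dst[keep]
```

The inlier point pairs returned here can then serve as the point sets sampled for the fine registration described in step S103.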
Step S103: and sampling the image characteristic points and carrying out fine registration to optimize the registration matrix.
On the basis of the coarse registration, the method performs fine registration on the feature points, calculates the positional distances between corresponding points, and obtains a fine registration matrix by minimizing the average point-to-point distance, so as to achieve a more accurate registration effect.
In an embodiment of the present application, the step S103 specifically includes:
A. and taking the image characteristic point of an image to be spliced in the image pair to be spliced as a two-dimensional image point set to be registered, and taking the image characteristic point of the other image to be spliced as a target image point set.
B. And respectively sampling the two-dimensional image point set to be registered and the target image point set.
C. And respectively calculating the inner points and the outer points and the position distances between the points, and iteratively optimizing the registration matrix according to the quantity proportion of the inner points and the outer points and the average distance between the points and the points.
D. And further optimizing the registration matrix by combining the registration matrix of each layer and the position information of the whole FOV where the layer is located on the basis.
Briefly, a pose estimation of two-dimensional point sets based on singular value decomposition is adopted to refine the registration matrix of the two-dimensional full spine images. Before fine registration, the feature point information of the images to be registered is obtained; the images are then finely registered according to the initial matching matrix of each image pair to be spliced and the image feature point information to obtain the registration matrix.
Feature extraction algorithms such as Sift, Surf, Orb and the like extract feature point information, the points are used as a two-dimensional image point set to be registered and a target image point set, and then the feature point set is subjected to sampling processing. And further optimizing the sampled set, calculating an inner point and an outer point, calculating the position distance between the points, and iteratively optimizing a registration matrix according to the quantity proportion of the inner point and the outer point and the average distance between the points, preferably to ensure that the average distance between the points is the minimum. The specific iterative process is as follows:
Let R denote the rotation matrix, t the translation vector, P2 the set of image points to be registered and P1 the target image point set. The registration parameters (R, t) are expected to minimize the following objective:
F(R, t) = (1/N) Σ_i ||P1_i − (R · P2_i + t)||²,
where F is the objective function, i.e. the average distance between corresponding points that is to be minimized, as described above.
Let H be the cross-covariance matrix of the two centered point sets; its singular value decomposition (SVD) is H = UΣV^T, and the optimized rotation matrix is then R = VU^T;
the translation information can be further calculated from the rotation matrix R.
In this embodiment, each feature point of the two-dimensional image point set to be registered is transformed by the registration matrix obtained in the current iteration, and a search is then performed to determine whether it has a nearest neighbor point in the target image point set; if a nearest neighbor exists, the point is considered an inner point; if not, it is considered an outer point.
In other words, borrowing the concept of inner and outer points, the present application queries, through a search method such as a k-d tree, whether each feature point of the moving image (image 1 to be registered), after being transformed by the registration matrix obtained in the current iteration, has a nearest neighbor point in the target image (image 2 to be registered). If a nearest neighbor exists, the point is regarded as an inner point; otherwise it is regarded as an outer point.
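A compact sketch of this iterative fine registration: nearest neighbors are found with a k-d tree, correspondences closer than a distance threshold are treated as inner points, and the rigid transform is re-estimated from the inner points by SVD. The distance threshold and iteration count are illustrative assumptions, and the initial R and t are taken from the coarse registration.

```python
import numpy as np
from scipy.spatial import cKDTree

def rigid_from_svd(src, dst):
    """Least-squares rotation R and translation t mapping 2D points src onto dst."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)      # cross-covariance of the centered point sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T                            # R = V U^T
    if np.linalg.det(R) < 0:                  # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    return R, c_dst - R @ c_src

def fine_registration(points_moving, points_target, R, t, max_iter=50, inlier_dist=5.0):
    """Iteratively refine (R, t); points closer than inlier_dist count as inner points."""
    tree = cKDTree(points_target)
    for _ in range(max_iter):
        moved = points_moving @ R.T + t
        dist, idx = tree.query(moved)
        inliers = dist < inlier_dist          # inner points; the rest are outer points
        if inliers.sum() < 3:
            break
        R, t = rigid_from_svd(points_moving[inliers], points_target[idx[inliers]])
    return R, t, inliers
```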
The aim of this registration calculation is to optimize the registration matrix so that the positional difference between the feature point pairs of the images to be registered, computed through the optimized matrix, is minimal; this is achieved by the method above, the rotation matrix is recalculated and the registration precision is improved. Fig. 4A shows a simulation of the coarse registration between the feature points of an image pair to be spliced, and Fig. 4B a simulation of the fine registration; the positions indicated by the rectangular boxes are the splicing positions.
In one or more embodiments of the present application, the method further comprises: the registration matrix is further optimized or fine tuned by combining image information.
Specifically, the image information is combined to optimize the image registration matrix. The image information includes, but is not limited to, the feature information of each image to be spliced, and the positional relation between each image pair to be spliced can be obtained during preprocessing. The image information obtained from different slices differs: images far from and near the magnetic field center are distorted differently and differ in quality, while the registration matrices of the per-slice image pairs between the several parts (such as spine 1, spine 2, ..., spine n) acquired from the same patient in the same scanning session should be the same. Each slice registration matrix is therefore checked, its rotation angle and translation amount are calculated, and the rotation matrix is finely adjusted after the fine registration.
In some embodiments, the splicing matrix of a slice far from the magnetic field center may be fine-tuned using the splicing matrices of several slices near the magnetic field center. As another example, the rotation angle between all slices should be less than a threshold; within that threshold, the mean and variance of the rotation angles of all matrices are calculated, and the rotation angles and translation amounts of the matrices that deviate strongly from the mean are fine-tuned towards the mean. Through this adjustment, more accurate registration is achieved: the registration matrices of the better-quality slice images are used to optimize the registration matrices of the slice images with poorer quality.
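A sketch of such a per-slice consistency check, under the assumption that each slice registration is available as a 2 × 2 rotation plus a 2-vector translation; the deviation threshold is illustrative, and deviating slices are simply snapped to the mean here for brevity rather than gradually fine-tuned.

```python
import numpy as np

def harmonize_slice_matrices(rotations, translations, max_dev_deg=2.0):
    """rotations: list of 2x2 matrices, one per slice; translations: list of 2-vectors."""
    angles = np.array([np.degrees(np.arctan2(R[1, 0], R[0, 0])) for R in rotations])
    mean_angle, mean_t = angles.mean(), np.mean(translations, axis=0)
    out_R, out_t = [], []
    for ang, R, t in zip(angles, rotations, translations):
        if abs(ang - mean_angle) > max_dev_deg:   # likely an off-center, distorted slice
            a = np.radians(mean_angle)
            R = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
            t = mean_t
        out_R.append(R)
        out_t.append(t)
    return out_R, out_t
```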
It should be noted that, due to the limitation of FOV (field of view), the full spine information of an adult often needs to take 2-3 partial spine images to obtain complete information. The purpose of image stitching is to calculate the position relation between the images to be stitched according to the overlapping part between the images to be stitched, and obtain a seamless, clear and large-range panoramic image through an image fusion technology, so that a doctor is assisted to obtain more information from the images, and the doctor is assisted to diagnose.
Obtaining a seamless, clear, large-range panoramic image means integrating the information of several images; during this integration, the integrity of the original information must be preserved as comprehensively as possible, and the integrated image should have no visible joins and should be clearly viewable. One of the key requirements is seamlessness, i.e. accurate image registration. However, because of some characteristics of MR images themselves, problems such as indistinct or repetitive feature information may arise. For example, for a slice far from the center of the magnet, the image is prone to distortion, the feature information is not obvious, and the stitching error of the slice image tends to increase at the edges. Figs. 5A-5B show spine images of the parts to be stitched, scanned from the same person in the same sequence: Fig. 5A is an image at a position far from the magnet center, and Fig. 5B an image close to the magnet center. Comparison shows that the farther from the magnet center, as in Fig. 5A, the more easily the image is distorted and the less obvious the feature information.
Aiming at the problem, after the registration calculation is completed, the 3D space position of the image is positioned according to the position information of the image, and the data such as the registration matrix, the image boundary deformation and the like are optimized by combining the calculated registration matrix group, so that the accuracy of the image registration is improved, and a more perfect splicing result is obtained.
Step S104: and splicing the images to be spliced according to the optimized registration matrix, and fusing the spliced images.
Although a precise registration matrix has been obtained, the images may still be deformed to some extent by problems such as magnetic field non-uniformity, so this step aims to correct and smoothly join the connected regions.
In an embodiment of the present application, the fusion process includes:
A. calculating the final inner point set and the center of the inner point according to the inner point and the outer point calculated during the fine registration;
B. setting a weight according to the deviation of the distance from each inner point to the center of the image to be registered;
C. and adopting a template frame, multiplying the template frame by the weight to obtain a matrix, and sliding from the joint of the two images to be registered to finish the gradual change correction of the boundary.
Briefly, a gradual-change method may be adopted. In the present application, the inner and outer points are calculated during the fine registration process, and the center of the inner points, i.e. the average of the inner-point coordinates, is computed from the final inner point set. Since image deformation often occurs at the image boundary, the deviation of the distances from the inner points near the boundary to this center tends to increase for the two images to be spliced; a weight is therefore set according to the deviation of the distance from each inner point of the two images to be registered to the center. A template frame is adopted, and the matrix obtained by multiplying the template frame by the weights is slid across the image joint, preferably in the direction perpendicular to the image seam (for example, if the images are spliced left and right, the matrix slides vertically), so as to complete the gradual-change correction of the boundary.
In addition to distortion, there may be a brightness difference in the overlapping portion due to differences in the signal intensity of the image pixels, as shown by the rectangular box in Fig. 6A; a transition can be applied to this portion by a fade-in, fade-out method. As shown in the simplified overlap-region diagram of Fig. 6B corresponding to Fig. 6A, the position coordinates of overlap portion 2 are calculated and a gradient coefficient b is assigned according to the distance of each pixel row of the overlap area from the upper boundary of overlap portion 2; b varies from 0 to 1, i.e. b = 0 at the upper boundary and b = 1 at the lower boundary. Let p1 denote the pixel values of image 1 and p2 the pixel values of image 2, with the subscript i denoting the i-th row of the overlap region (i = 0 at the upper boundary of the overlap). The fused pixel value is then:
p_i = p1_i × (1 − b) + b × p2_i;
so the pixel values of image 1 transition smoothly into the pixel values of image 2 through the fade-in, fade-out calculation.
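The fade-in, fade-out blend above translates directly into a per-row weighted sum over the overlap region; this sketch assumes the two overlap crops have already been resampled onto the same grid and aligned by the registration matrix.

```python
import numpy as np

def blend_overlap(overlap1, overlap2):
    """Linearly blend two aligned overlap crops row by row: b runs from 0 (top) to 1 (bottom)."""
    rows = overlap1.shape[0]
    b = np.linspace(0.0, 1.0, rows).reshape(-1, 1)   # gradient coefficient per row
    return overlap1 * (1.0 - b) + overlap2 * b

# At the upper boundary the result equals image 1, at the lower boundary image 2,
# so the pixel values of image 1 transition smoothly into those of image 2.
```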
In addition, in some realizable embodiments, the present application also includes some other fusion processing methods.
1) And adjusting the whole image and splicing seamlessly.
The key to this step is seamlessness. The gray-level information of the same part may differ between images, and the same tissue may be displayed very differently. For example, in fat-suppressed scanning with a FatSat-FSE sequence, the fat suppression over a large range and away from the center is only moderate; there may be differences in fat suppression in off-center parts, and high fat signal may appear in the image. The purpose of this step is to adjust the junction of the two spliced images and the whole connected image, so that the spliced result is displayed more clearly and seamlessly.
2) Adjusting the window width and window level of the image and displaying the spliced image.
The window width and window level differ between images at different positions, and adjusting them helps display the images more clearly. In this step, the window width and window level of the spliced image are set according to the window width/level information of the images to be spliced, so that the image can be displayed better.
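Window width/level adjustment for display can be sketched as a simple clip-and-rescale; taking, for example, the averaged window settings of the source images for the spliced image is an assumption for illustration only.

```python
import numpy as np

def apply_window(image, center, width):
    """Map pixel values inside [center - width/2, center + width/2] to the 0-255 display range."""
    lo, hi = center - width / 2.0, center + width / 2.0
    out = np.clip(image, lo, hi)
    return ((out - lo) / (hi - lo) * 255.0).astype(np.uint8)

# Hypothetical choice: display the spliced image with the mean window of its source images.
# stitched_disp = apply_window(stitched, center=np.mean(centers), width=np.mean(widths))
```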
In this example, various feature point extraction methods were tried, and simulations show that, after feature points are extracted, the feature point information of the same part differs between images, as shown in Figs. 7A and 7B: the feature points in the rectangular frame of Fig. 7A are not exactly the same as those in the rectangular frame of Fig. 7B. The rectangular frames mark the overlapping portions; although they result from scanning the same tissue, the feature information varies with the scanning time and other factors. Algorithms such as RANSAC (random sample consensus) and BFMatcher (brute-force matching) can assist in optimizing the registration matrix to a certain extent, but the resulting registration matrix may still deviate somewhat, and the fine registration method adopted here has a positive effect on the optimization of the registration matrix.
In summary, the method and the device have high registration precision, the average deviation value of the inner points is obviously reduced after fine registration, and a good registration effect can be obtained for the slice layer far away from the center of the magnetic field through the optimization of the registration matrix.
Fig. 9 is a block diagram of an apparatus for automatically and precisely stitching mri full-spine images according to an embodiment of the present invention. As shown, the apparatus 900 includes:
the pre-processing module 901 is configured to obtain at least two images to be stitched and corresponding scanning information, pre-process each image to be stitched, and determine one or more image pairs to be stitched according to an overlapping region;
a coarse registration module 902, configured to extract image feature points of each image pair to be spliced and perform coarse registration to obtain a corresponding registration matrix;
a fine registration module 903, configured to perform sampling processing on the image feature points and perform fine registration to optimize the registration matrix;
and the fusion module 904 is configured to splice each image to be spliced according to the optimized registration matrix, and perform fusion processing on the spliced images.
It should be noted that, for the information interaction, execution process and other details between the modules/units of the above device, since they are based on the same concept as the method embodiments of the present application, their technical effects are the same as those of the method embodiments; for specific details, reference may be made to the description of the foregoing method embodiments, which is not repeated here.
It should be further noted that the division of the modules of the above system is only a logical division, and the actual implementation may be wholly or partially integrated into one physical entity, or may be physically separated. And these units can be implemented entirely in software, invoked by a processing element; or may be implemented entirely in hardware; and part of the modules can be realized in the form of calling software by the processing element, and part of the modules can be realized in the form of hardware. For example, the fusion module 904 may be a separately established processing element, or may be integrated into a chip of the system, or may be stored in a memory of the system in the form of program code, and a processing element of the system calls and executes the functions of the fusion module 904. Other modules are implemented similarly. In addition, all or part of the modules can be integrated together or can be independently realized. The processing element described herein may be an integrated circuit having signal processing capabilities. In implementation, each step of the above method or each module above may be implemented by an integrated logic circuit of hardware in a processor element or an instruction in the form of software.
For example, the above modules may be one or more integrated circuits configured to implement the above methods, such as one or more Application Specific Integrated Circuits (ASICs), one or more digital signal processors (DSPs), or one or more Field Programmable Gate Arrays (FPGAs). For another example, when one of the above modules is implemented in the form of program code scheduled by a processing element, the processing element may be a general-purpose processor, such as a Central Processing Unit (CPU) or another processor capable of calling program code. For another example, these modules may be integrated together and implemented in the form of a system-on-a-chip (SOC).
Fig. 10 is a schematic structural diagram of a computer device according to an embodiment of the present invention. As shown, the computer device 1000 includes: a memory 1001, and a processor; the memory 1001 is used for storing computer instructions; the processor 1002 executes computer instructions to implement the method described in fig. 1.
In some embodiments, the number of the memories 1001 in the computer device 1000 may be one or more, the number of the processors 1002 may be one or more, and fig. 10 is taken as an example.
In an embodiment of the present application, the processor 1002 in the computer device 1000 loads one or more instructions corresponding to processes of an application program into the memory 1001 according to the steps described in fig. 1, and the processor 1002 executes the application program stored in the memory 1001, thereby implementing the method described in fig. 1.
The Memory 1001 may include a Random Access Memory (RAM) or a non-volatile Memory (non-volatile Memory), such as at least one disk Memory. The memory 1001 stores an operating system and operating instructions, executable modules or data structures, or a subset thereof, or an expanded set thereof, wherein the operating instructions may include various operating instructions for implementing various operations. The operating system may include various system programs for implementing various basic services and for handling hardware-based tasks.
The Processor 1002 may be a general-purpose Processor, and includes a Central Processing Unit (CPU), a Network Processor (NP), and the like; the Integrated Circuit may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, a discrete Gate or transistor logic device, or a discrete hardware component.
In some specific applications, the various components of the computer device 1000 are coupled together by a bus system that may include a power bus, a control bus, a status signal bus, etc., in addition to a data bus. But for the sake of clarity the various buses are referred to as a bus system in figure 10.
In an embodiment of the present application, a computer-readable storage medium is provided, on which a computer program is stored, which when executed by a processor implements the method described in fig. 1.
The present application may be embodied as systems, methods, and/or computer program products, in any combination of technical details. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present application.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable programs described herein may be downloaded from a computer-readable storage medium to a variety of computing/processing devices, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present application may be assembly instructions, Instruction Set Architecture (ISA) instructions, machine related instructions, microcode, firmware instructions, state setting data, integrated circuit configuration data, or source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C + + or the like and procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, the electronic circuitry can execute computer-readable program instructions to implement aspects of the present application by utilizing state information of the computer-readable program instructions to personalize the electronic circuitry, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA).
In summary, the method, device, equipment and medium for automatically and accurately splicing the magnetic resonance full spine image provided by the application acquire at least two images to be spliced and the corresponding scanning information, preprocess each image to be spliced, and determine one or more image pairs to be spliced according to the overlapping area; extract the image characteristic points of each image pair to be spliced and carry out coarse registration to obtain a corresponding registration matrix; sample the image characteristic points and carry out fine registration to optimize the registration matrix; and splice the images to be spliced according to the optimized registration matrix and fuse the spliced images.
The application effectively overcomes various defects in the prior art and has high industrial utilization value.
The above embodiments are merely illustrative of the principles and utilities of the present application and are not intended to limit the invention. Any person skilled in the art can modify or change the above-mentioned embodiments without departing from the spirit and scope of the present application. Accordingly, it is intended that all equivalent modifications or changes which can be made by those skilled in the art without departing from the spirit and technical spirit of the present invention be covered by the claims of the present application.
Claims (10)
1. An automatic accurate splicing method for a magnetic resonance full spine image is characterized by comprising the following steps:
acquiring at least two images to be spliced and corresponding scanning information, preprocessing each image to be spliced, and determining one or more image pairs to be spliced according to an overlapping area;
extracting image characteristic points of each image pair to be spliced and carrying out coarse registration to obtain a corresponding registration matrix;
sampling the image characteristic points and carrying out fine registration to optimize the registration matrix;
and splicing the images to be spliced according to the optimized registration matrix, and fusing the spliced images.
2. The method of claim 1, wherein the scan information comprises: any one or more of image resolution, pixel resolution, image orientation, patient position, image layer thickness, number of image layers, and patient number.
3. The method according to claim 2, wherein the acquiring at least two images to be stitched and corresponding scanning information to pre-process each image to be stitched comprises:
determining the splicing range and the splicing sequence of the images to be spliced;
unifying the pixel resolution and the image resolution of each image to be spliced according to the scanning information;
calculating a rotation matrix according to the scanning information, unifying the images to be spliced into a common coordinate system through the rotation matrix, and obtaining, through interpolation, slice images of the same body part acquired at different scanning positions on the same slice plane;
and calculating, according to the unified coordinate system, the three-dimensional coordinates of all the slice images in the patient-bed coordinate system in which they are located during scanning, and calculating the three-dimensional overlapping region of the images to be spliced in the patient-bed coordinate system according to the coordinate position of each part, so as to determine one or more image pairs to be spliced.
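One possible illustration of this coordinate unification and overlap test is sketched below; it assumes DICOM-style orientation cosines and slice positions and reduces the three-dimensional overlap check to an interval intersection along one bed axis, which is a simplification rather than the claimed construction.

```python
import numpy as np

def rotation_matrix(image_orientation):
    """Rotation from image axes to bed axes, built from the six direction
    cosines (first three: row direction, last three: column direction)."""
    row_dir = np.asarray(image_orientation[:3], dtype=float)
    col_dir = np.asarray(image_orientation[3:], dtype=float)
    normal = np.cross(row_dir, col_dir)          # slice normal
    return np.stack([row_dir, col_dir, normal], axis=1)

def station_extent(slice_positions, axis=2):
    """Min/max coordinate of a station's slices along one bed axis."""
    coords = np.asarray(slice_positions, dtype=float)[:, axis]
    return float(coords.min()), float(coords.max())

def overlap_interval(extent_a, extent_b):
    """Shared interval of two stations along the chosen axis, or None."""
    lo, hi = max(extent_a[0], extent_b[0]), min(extent_a[1], extent_b[1])
    return (lo, hi) if lo < hi else None
```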
4. The method according to claim 1, wherein the extracting of the image feature points of each image pair to be spliced calculates the feature points by traversing a feature template; the feature template may be arranged in different shapes according to the image characteristics.
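One way to read "traversing a feature template" is sketched below: a ring-shaped template is slid over the image and a pixel is kept as a feature point when enough template offsets differ strongly from the centre value. The ring shape, the threshold, and the vote count are illustrative choices, not the template defined by this application.

```python
import numpy as np

# A ring-shaped feature template given as (dy, dx) offsets around the centre.
RING = [(-3, 0), (-2, 2), (0, 3), (2, 2), (3, 0), (2, -2), (0, -3), (-2, -2)]

def template_features(img, thresh=0.2, min_hits=6):
    """Return (y, x) feature points of a 2-D float image scaled to [0, 1]."""
    h, w = img.shape
    points = []
    for y in range(3, h - 3):
        for x in range(3, w - 3):
            centre = img[y, x]
            hits = sum(abs(img[y + dy, x + dx] - centre) > thresh
                       for dy, dx in RING)
            if hits >= min_hits:
                points.append((y, x))
    return np.asarray(points)
```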
5. The method according to claim 1, wherein the sampling of the image feature points and the fine registration to optimize the registration matrix comprise:
taking the image feature points of one image to be spliced in the image pair to be spliced as a two-dimensional image point set to be registered, and taking the image feature points of the other image to be spliced as a target image point set;
respectively sampling the two-dimensional image point set to be registered and the target image point set;
respectively calculating the inner points and the outer points and the position distances between corresponding points, and iteratively optimizing the registration matrix according to the ratio of the numbers of inner points and outer points and the average point-to-point distance;
and, on this basis, further optimizing the registration matrix by combining the registration matrix of each layer with the position information of the whole FOV in which the layer is located.
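As a hedged sketch of this last step, the per-layer rigid matrices of one FOV could be pulled towards a consensus transform built from all layers of that FOV; the mean-angle/median-translation consensus and the 50/50 pull factor are assumptions for illustration only.

```python
import numpy as np

def combine_layer_matrices(rotations, translations, pull=0.5):
    """rotations: list of 2x2 rotation matrices (one per layer of a FOV);
    translations: list of length-2 vectors. Small rotation angles assumed."""
    angles = np.array([np.arctan2(r[1, 0], r[0, 0]) for r in rotations])
    ref_angle = float(np.mean(angles))                  # consensus rotation
    ref_t = np.median(np.stack(translations), axis=0)   # consensus translation
    refined = []
    for ang, t in zip(angles, translations):
        a = (1 - pull) * ang + pull * ref_angle
        r = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
        refined.append((r, (1 - pull) * np.asarray(t, dtype=float) + pull * ref_t))
    return refined
```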
6. The method according to claim 5, wherein the calculating of the inner points and the outer points comprises:
applying the registration matrix obtained in the current iteration to each feature point in the two-dimensional image point set to be registered, and searching whether the transformed point has a nearest neighbor point in the target image point set;
if a nearest neighbor point exists, the feature point is regarded as an inner point; if not, it is regarded as an outer point.
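The nearest-neighbour inlier test and the iterative optimization described above can be illustrated with an ICP-style loop such as the one below; the rigid (rotation plus translation) model, the KD-tree search, and the fixed distance tolerance are illustrative assumptions rather than the claimed procedure.

```python
import numpy as np
from scipy.spatial import cKDTree

def rigid_from_pairs(src, dst):
    """Least-squares rotation and translation mapping src onto dst (Kabsch)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    u, _, vt = np.linalg.svd((src - cs).T @ (dst - cd))
    r = vt.T @ u.T
    if np.linalg.det(r) < 0:              # guard against a reflection solution
        vt[-1] *= -1
        r = vt.T @ u.T
    return r, cd - r @ cs

def refine(src, dst, r, t, dist_tol=2.0, iters=20):
    """Refine an initial (r, t) by repeated nearest-neighbour matching."""
    tree = cKDTree(dst)
    for _ in range(iters):
        moved = src @ r.T + t
        d, idx = tree.query(moved)
        inliers = d < dist_tol                    # points with a close neighbour
        if inliers.sum() < 3:                     # too few correspondences
            return r, t, float(inliers.mean()), float("inf")
        r, t = rigid_from_pairs(src[inliers], dst[idx[inliers]])
    return r, t, float(inliers.mean()), float(d[inliers].mean())
```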
7. The method of claim 5, wherein the fusion process comprises:
calculating the final inner point set and the center of the inner points according to the inner points and the outer points calculated during the fine registration;
setting a weight according to the deviation of the distance from each inner point of the image to be registered to the center;
and adopting a template frame, multiplying the template frame by the weights to obtain a weighting matrix, and sliding it from the joint of the two images to be registered to complete the gradual correction of the boundary.
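A hedged reading of this fusion step is sketched below: the centre of the final inlier set is computed, a weight falls off with the deviation from that centre, and a small averaging template is slid along the joint to smooth the boundary gradually. The column-wise Gaussian weighting and the box template are assumptions, not the exact construction claimed here.

```python
import numpy as np

def inlier_center(inlier_points):
    """Centre (y, x) of the final inner point set."""
    return np.asarray(inlier_points, dtype=float).mean(axis=0)

def correct_boundary(stitched, joint_row, inlier_points, half=8, sigma=50.0):
    """Slide a (2*half)-row template along the joint and blend each column
    towards its local average, weighted by deviation from the inlier centre."""
    centre = inlier_center(inlier_points)
    out = stitched.astype(float).copy()
    for x in range(out.shape[1]):
        w = np.exp(-((x - centre[1]) ** 2) / (2.0 * sigma ** 2))
        band = out[joint_row - half: joint_row + half, x]
        out[joint_row - half: joint_row + half, x] = (1 - w) * band + w * band.mean()
    return out
```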
8. An automatic and accurate splicing apparatus for a magnetic resonance full spine image, the apparatus comprising:
the preprocessing module is used for acquiring at least two images to be spliced and corresponding scanning information, preprocessing each image to be spliced and determining one or more image pairs to be spliced according to an overlapping area;
the coarse registration module is used for extracting image feature points of each image pair to be spliced and carrying out coarse registration to obtain a corresponding registration matrix;
the fine registration module is used for sampling the image feature points and performing fine registration to optimize the registration matrix;
and the fusion module is used for splicing the images to be spliced according to the optimized registration matrix and fusing the spliced images.
9. A computer device, comprising: a memory and a processor; wherein the memory is configured to store computer instructions, and the processor executes the computer instructions to implement the method according to any one of claims 1 to 7.
10. A computer-readable storage medium having stored thereon computer instructions which, when executed, perform the method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210178574.2A CN114549320A (en) | 2022-02-25 | 2022-02-25 | Automatic and accurate splicing method, device, equipment and medium for magnetic resonance full spine image |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114549320A (en) | 2022-05-27 |
Family
ID=81679948
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210178574.2A (CN114549320A, pending) | Automatic and accurate splicing method, device, equipment and medium for magnetic resonance full spine image | 2022-02-25 | 2022-02-25 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114549320A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116523982A (en) * | 2023-05-12 | 2023-08-01 | 北京长木谷医疗科技股份有限公司 | Sparse point cloud registration method and device based on similarity compatibility measurement |
CN116523982B (en) * | 2023-05-12 | 2024-05-03 | 北京长木谷医疗科技股份有限公司 | Sparse point cloud registration method and device based on similarity compatibility measurement |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8861868B2 (en) | Patch-based synthesis techniques | |
Sorzano et al. | Elastic registration of biological images using vector-spline regularization | |
EP2491531B1 (en) | Alignment of an ordered stack of images from a specimen. | |
JP5337354B2 (en) | System and method for geometric registration | |
US8831382B2 (en) | Method of creating a composite image | |
CN110111250B (en) | Robust automatic panoramic unmanned aerial vehicle image splicing method and device | |
CN111583120B (en) | Image stitching method, device, equipment and storage medium | |
JPH06223159A (en) | Method for three-dimensional imaging | |
CN112184888A (en) | Three-dimensional blood vessel modeling method and device | |
Xiaohua et al. | Simultaneous segmentation and registration for medical image | |
Feuerstein et al. | Reconstruction of 3-D histology images by simultaneous deformable registration | |
JPH0528243A (en) | Image-forming device | |
CN114549320A (en) | Automatic and accurate splicing method, device, equipment and medium for magnetic resonance full spine image | |
Tella-Amo et al. | Probabilistic visual and electromagnetic data fusion for robust drift-free sequential mosaicking: application to fetoscopy | |
Saalfeld | Computational methods for stitching, alignment, and artifact correction of serial section data | |
CN118135111A (en) | Three-dimensional reconstruction method and device and electronic equipment | |
CN104182937B (en) | A kind of method and system for strengthening symmetrical shape | |
Jiang et al. | Regions of interest extraction from SPECT images for neural degeneration assessment using multimodality image fusion | |
Oliveira et al. | New technique for binary morphological shape-based interpolation | |
Collignon et al. | New high-performance 3D registration algorithms for 3D medical images | |
Liu | Improving Image Stitching Effect using Super-Resolution Technique. | |
Marsland et al. | Conformal image registration based on constrained optimization | |
Lim | Achieving accurate image registration as the basis for super-resolution | |
Barreto et al. | Non-static object reconstruction system based on multiple RGB-D cameras | |
CN118015237B (en) | Multi-view image stitching method and system based on global similarity optimal seam |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |