CN106889999B - The method and apparatus of multi-modal detection system image co-registration - Google Patents
- Publication number: CN106889999B
- Application number: CN201611239157.5A
- Authority: China (CN)
- Prior art keywords: coordinate, pixel, image, point source, coordinate system
- Legal status: Active
Classifications
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/52—Devices using data or image processing specially adapted for radiation diagnosis
- A61B6/5211—Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data
- A61B6/5229—Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data combining image data of a patient, e.g. combining a functional image with an anatomical image
- A61B6/5235—Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data combining image data of a patient, e.g. combining a functional image with an anatomical image combining images from the same or different ionising radiation imaging techniques, e.g. PET and CT
Abstract
The invention discloses a method and apparatus for image fusion in a multi-modal detection system, belonging to the field of medical imaging technology. The method includes: detecting a target object by the multi-modal detection system to obtain a first image and a second image, where the FOV coordinate system of the first image is a first coordinate system and the FOV coordinate system of the second image is a second coordinate system; obtaining a first transition matrix, which is used to convert the coordinate of each first pixel of the first image in the first coordinate system into a coordinate in the second coordinate system; determining the second coordinate of each first pixel in the second coordinate system according to the first transition matrix and the first coordinate of each first pixel in the first coordinate system; and fusing the first image and the second image into a multi-modal detection image according to the second coordinate of each first pixel and the third coordinate of each second pixel of the second image in the second coordinate system. The invention improves the accuracy of fusing the first image and the second image.
Description
Technical field
The present invention relates to the field of medical imaging technology, and in particular to a method and apparatus for image fusion in a multi-modal detection system.
Background technique
A multi-modal detection system contains two kinds of detectors. The same target object is probed by both detectors to obtain two images; the two images are fused to obtain a multi-modal detection image, and the multi-modal detection image is then analyzed to analyze the target object. For example, a PET-CT (Positron Emission Tomography-Computed Tomography) detection system includes a PET detector and a CT detector: the PET detector detects a PET image of lesions inside the human body, and the CT detector detects a CT image of the body's organs and tissues; the PET image and the CT image are fused to obtain a PET-CT image. From the PET-CT image, a doctor can find lesions in the body and accurately locate their positions.
To facilitate fusing the PET image and the CT image, before the PET image is detected by the PET detector and the CT image by the CT detector, an engineer manually adjusts the PET detector and the CT detector so that the FOV (Field-of-View) coordinate system of the PET detector coincides with the FOV coordinate system of the CT detector; that is, the origin of the PET detector's FOV coordinate system coincides with the origin of the CT detector's FOV coordinate system, and the X, Y and Z axes of the PET detector's FOV coordinate system are respectively parallel to the X, Y and Z axes of the CT detector's FOV coordinate system.
When the PET image and the CT image are fused, the control terminal in the PET-CT detection system fuses them according to the first coordinate of each first pixel in the PET image and the third coordinate of each second pixel in the CT image, obtaining a PET-CT image. The first coordinate of each first pixel is the coordinate of that first pixel in the FOV coordinate system of the PET detector; the third coordinate of each second pixel is the coordinate of that second pixel in the FOV coordinate system of the CT detector.
In implementing the present invention, the inventors found that the prior art has at least the following problem: owing to the limited precision of mechanical adjustment, there is usually an error between the FOV coordinate system of the PET detector and the FOV coordinate system of the CT detector, so the accuracy of the above image fusion is poor.
Summary of the invention
To solve the problems in the prior art, the present invention provides a method and apparatus for image fusion in a multi-modal detection system. The technical solution is as follows:

In a first aspect, an embodiment of the present invention provides a method of multi-modal detection system image fusion, the method comprising:
detecting a target object by the multi-modal detection system to obtain a first image and a second image, the field-of-view (FOV) coordinate system of the first image being a first coordinate system and the FOV coordinate system of the second image being a second coordinate system;

obtaining a first transition matrix, the first transition matrix being used to convert the coordinate of each first pixel of the first image in the first coordinate system into a coordinate in the second coordinate system;

determining the second coordinate of each first pixel according to the first transition matrix and the first coordinate of each first pixel, the first coordinate of each first pixel being the coordinate of that first pixel in the first coordinate system and the second coordinate being its coordinate in the second coordinate system;

fusing the first image and the second image according to the second coordinate of each first pixel and the third coordinate of each second pixel of the second image, to obtain a multi-modal detection image, the third coordinate of each second pixel being the coordinate of that second pixel in the second coordinate system.
In one possible design, obtaining the first transition matrix comprises:

detecting a coordinate registration phantom by the multi-modal detection system to obtain a third image and a fourth image, the coordinate registration phantom including n point sources, each point source being an object detectable by the multi-modal detection system, for example an 18F-FDG solution mixed with medical lipiodol, or a solid point source; the FOV coordinate system of the third image is the first coordinate system, the FOV coordinate system of the fourth image is the second coordinate system, and n is an integer greater than or equal to 4;

determining the fourth coordinate of each of the n point sources in the third image, and determining the fifth coordinate of each point source in the fourth image, the fourth coordinate of each point source being the coordinate of that point source in the first coordinate system and the fifth coordinate being its coordinate in the second coordinate system;

determining the first transition matrix according to the fourth coordinate and the fifth coordinate of each point source.
In one possible design, determining the fourth coordinate of each of the n point sources in the third image comprises:

for each point source, determining the sphere in which the point source lies in the third image, and obtaining the pixel value and the sixth coordinate of each third pixel within the sphere, the sixth coordinate of each third pixel being the coordinate of that third pixel in the first coordinate system;

determining the fourth coordinate of the point source from the pixel value and the sixth coordinate of each third pixel by the following formula (1), an intensity-weighted centroid over the pixels in the sphere:

Formula (1): (x4, y4, z4) = Σ_p [PixelValue_p · (x6, y6, z6)_p] / Σ_p PixelValue_p

where (x4, y4, z4) is the fourth coordinate of the point source, (x6, y6, z6) is the sixth coordinate of each third pixel, and PixelValue_p is the pixel value of each third pixel.
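Read as an intensity-weighted centroid over the pixels in the sphere (a standard choice, assumed here), formula (1) can be sketched as follows; the function and argument names are illustrative, and the sphere is assumed to have already been segmented into arrays of pixel values and coordinates:

```python
import numpy as np

def point_source_centroid(pixel_values, coords):
    """Intensity-weighted centroid of the pixels inside a point-source sphere.

    pixel_values: shape (N,), pixel value of each third pixel
    coords:       shape (N, 3), sixth coordinate (x6, y6, z6) of each third pixel
    Returns the fourth coordinate (x4, y4, z4) of the point source.
    """
    pixel_values = np.asarray(pixel_values, dtype=float)
    coords = np.asarray(coords, dtype=float)
    # Formula (1): weight each coordinate by its pixel value, then normalize.
    return (pixel_values[:, None] * coords).sum(axis=0) / pixel_values.sum()

# Example: two pixels with equal intensity -> the centroid is their midpoint.
print(point_source_centroid([1.0, 1.0], [[0, 0, 0], [2, 0, 0]]))  # -> [1. 0. 0.]
```

The same routine serves for formula (2), with the ninth coordinates and pixel values of the fourth pixels as inputs.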
In one possible design, determining the fifth coordinate of each point source in the fourth image comprises:

for each point source, determining the sphere in which the point source lies in the fourth image, and obtaining the pixel value and the ninth coordinate of each fourth pixel within the sphere, the ninth coordinate of each fourth pixel being the coordinate of that fourth pixel in the second coordinate system;

determining the fifth coordinate of the point source from the pixel value and the ninth coordinate of each fourth pixel by the following formula (2), an intensity-weighted centroid over the pixels in the sphere:

Formula (2): (x5, y5, z5) = Σ_q [PixelValue_q · (x9, y9, z9)_q] / Σ_q PixelValue_q

where (x5, y5, z5) is the fifth coordinate of the point source, (x9, y9, z9) is the ninth coordinate of each fourth pixel, and PixelValue_q is the pixel value of each fourth pixel.
In one possible design, determining the first transition matrix according to the fourth coordinate and the fifth coordinate of each point source comprises:

determining the coordinate-difference expression of each point source according to its fourth coordinate, its fifth coordinate and the transform expression of the first transition matrix;

determining the total coordinate-difference expression of the point sources according to the coordinate-difference expression of each point source;

determining, according to the total coordinate-difference expression, the first transition matrix that satisfies a preset condition, the preset condition being that the value of the total coordinate-difference expression is a minimum.
In one possible design, determining the coordinate-difference expression of each point source according to its fourth coordinate, its fifth coordinate and the transform expression of the first transition matrix comprises:

determining the seventh coordinate of each point source from its fourth coordinate and the transform expression by the following formula two, the seventh coordinate of each point source being the coordinate of that point source in the second coordinate system:

Formula two: (x7, y7, z7) = M × (x4, y4, z4)

determining the coordinate-difference expression of each point source from its fifth coordinate and seventh coordinate by the following formula three:

Formula three: Δx = (x5, y5, z5) − (x7, y7, z7)

where (x7, y7, z7) is the seventh coordinate of each point source, (x5, y5, z5) is its fifth coordinate, (x4, y4, z4) is its fourth coordinate, M is the transform expression, and Δx is the coordinate-difference expression.
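The text states only the criterion: M must minimize the total coordinate-difference expression over the n ≥ 4 point-source pairs. One standard way to realize this (an assumption, not the patent's stated procedure) is a linear least-squares fit of an affine M in homogeneous coordinates; the function names below are illustrative:

```python
import numpy as np

def fit_transition_matrix(fourth_coords, fifth_coords):
    """Least-squares affine M with M @ [x4, y4, z4, 1] ~= [x5, y5, z5].

    fourth_coords: (n, 3) point-source coordinates in the first coordinate system
    fifth_coords:  (n, 3) the same point sources in the second coordinate system
    Returns M of shape (3, 4); requires n >= 4 non-coplanar point sources.
    """
    X4 = np.asarray(fourth_coords, dtype=float)
    X = np.hstack([X4, np.ones((len(X4), 1))])          # homogeneous coordinates
    # lstsq minimizes sum ||M @ x4h - x5||^2, i.e. the total coordinate difference.
    M, *_ = np.linalg.lstsq(X, np.asarray(fifth_coords, dtype=float), rcond=None)
    return M.T

def apply_transition(M, coords):
    """Formula two: seventh coordinate (x7, y7, z7) = M x (x4, y4, z4)."""
    coords = np.asarray(coords, dtype=float)
    return coords @ M[:, :3].T + M[:, 3]
```

With at least four non-coplanar point sources the twelve affine parameters are determined; extra point sources over-determine the system and the least-squares solve averages out centroid noise.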
In one possible design, the multi-modal detection system is a positron emission tomography-computed tomography (PET-CT) detection system, the first image is a PET image detected by the PET detector, and the second image is a CT image detected by the CT detector. Before determining the second coordinate of each first pixel according to the first transition matrix and the first coordinate of each first pixel, the method further comprises:

obtaining a second transition matrix according to the first transition matrix, the second transition matrix being used to convert the coordinate of each second pixel of the second image in the second coordinate system into a coordinate in the first coordinate system;

determining the eighth coordinate of each second pixel according to the second transition matrix and the third coordinate of each second pixel, the eighth coordinate of each second pixel being the coordinate of that second pixel in the first coordinate system;

determining the attenuation coefficient of each first pixel from the second image according to the eighth coordinate of each second pixel and the first coordinate of each first pixel;

correcting the first image according to the attenuation coefficient of each first pixel.
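The attenuation-correction steps above can be sketched as follows. This is a simplified illustration, not the exact procedure: the second transition matrix is taken as the inverse of the first (an assumption consistent with its stated role), the CT-to-PET pixel association uses nearest-neighbour lookup, and the per-pixel factor exp(mu) merely stands in for a real attenuation model; all names are hypothetical:

```python
import numpy as np

def invert_transition(M):
    """Second transition matrix from the first: invert the 3x4 affine M
    by promoting it to a 4x4 homogeneous matrix."""
    M4 = np.vstack([M, [0.0, 0.0, 0.0, 1.0]])
    return np.linalg.inv(M4)[:3]

def attenuation_correct(pet_values, pet_coords, ct_mu, ct_coords, M2):
    """Correct each first (PET) pixel with the attenuation coefficient of the
    nearest second (CT) pixel, after mapping the CT pixels into the first
    coordinate system with the second transition matrix M2 (eighth coordinates).
    """
    pet_values = np.asarray(pet_values, dtype=float)
    pet_coords = np.asarray(pet_coords, dtype=float)
    ct_mu = np.asarray(ct_mu, dtype=float)
    ct_coords = np.asarray(ct_coords, dtype=float)
    ct_in_first = ct_coords @ M2[:, :3].T + M2[:, 3]       # eighth coordinates
    d2 = ((pet_coords[:, None, :] - ct_in_first[None, :, :]) ** 2).sum(axis=-1)
    mu = ct_mu[d2.argmin(axis=1)]                          # attenuation coefficient per first pixel
    # Placeholder correction factor; a real system integrates mu along lines of response.
    return pet_values * np.exp(mu)
```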
In a second aspect, the present invention provides, in an implementation, an apparatus for image fusion in a multi-modal detection system, the apparatus comprising:

a detecting module, configured to detect a target object by the multi-modal detection system to obtain a first image and a second image, the field-of-view (FOV) coordinate system of the first image being a first coordinate system and the FOV coordinate system of the second image being a second coordinate system;

a first obtaining module, configured to obtain a first transition matrix, the first transition matrix being used to convert the coordinate of each first pixel of the first image in the first coordinate system into a coordinate in the second coordinate system;

a first determining module, configured to determine the second coordinate of each first pixel according to the first transition matrix and the first coordinate of each first pixel, the first coordinate of each first pixel being the coordinate of that first pixel in the first coordinate system and the second coordinate being its coordinate in the second coordinate system;

a fusion module, configured to fuse the first image and the second image according to the second coordinate of each first pixel and the third coordinate of each second pixel of the second image to obtain a multi-modal detection image, the third coordinate of each second pixel being the coordinate of that second pixel in the second coordinate system.
In one possible design, the first obtaining module comprises:

a probe unit, configured to detect a coordinate registration phantom by the multi-modal detection system to obtain a third image and a fourth image, the coordinate registration phantom including n point sources, each point source being an object detectable by the multi-modal detection system, for example an 18F-FDG solution mixed with medical lipiodol, or a solid point source; the FOV coordinate system of the third image is the first coordinate system, the FOV coordinate system of the fourth image is the second coordinate system, and n is an integer greater than or equal to 4;

a determination unit, configured to determine the fourth coordinate of each of the n point sources in the third image and the fifth coordinate of each point source in the fourth image, the fourth coordinate of each point source being the coordinate of that point source in the first coordinate system and the fifth coordinate being its coordinate in the second coordinate system; and to determine the first transition matrix according to the fourth coordinate and the fifth coordinate of each point source.
In one possible design, the determination unit comprises:

a first determining subunit, configured to determine, for each point source, the sphere in which the point source lies in the third image, and to obtain the pixel value and the sixth coordinate of each third pixel within the sphere, the sixth coordinate of each third pixel being the coordinate of that third pixel in the first coordinate system;

a second determining subunit, configured to determine the fourth coordinate of the point source from the pixel value and the sixth coordinate of each third pixel by formula (1): (x4, y4, z4) = Σ_p [PixelValue_p · (x6, y6, z6)_p] / Σ_p PixelValue_p, where (x4, y4, z4) is the fourth coordinate of the point source, (x6, y6, z6) is the sixth coordinate of each third pixel, and PixelValue_p is the pixel value of each third pixel.
In one possible design, the determination unit comprises:

a third determining subunit, configured to determine, for each point source, the sphere in which the point source lies in the fourth image, and to obtain the pixel value and the ninth coordinate of each fourth pixel within the sphere, the ninth coordinate of each fourth pixel being the coordinate of that fourth pixel in the second coordinate system;

a fourth determining subunit, configured to determine the fifth coordinate of the point source from the pixel value and the ninth coordinate of each fourth pixel by formula (2): (x5, y5, z5) = Σ_q [PixelValue_q · (x9, y9, z9)_q] / Σ_q PixelValue_q, where (x5, y5, z5) is the fifth coordinate of the point source, (x9, y9, z9) is the ninth coordinate of each fourth pixel, and PixelValue_q is the pixel value of each fourth pixel.
In one possible design, the determination unit comprises:

a fifth determining subunit, configured to determine the coordinate-difference expression of each point source according to its fourth coordinate, its fifth coordinate and the transform expression of the first transition matrix;

a sixth determining subunit, configured to determine the total coordinate-difference expression of the point sources according to the coordinate-difference expression of each point source;

a seventh determining subunit, configured to determine, according to the total coordinate-difference expression, the first transition matrix that satisfies a preset condition, the preset condition being that the value of the total coordinate-difference expression is a minimum.
In one possible design, the fifth determining subunit is further configured to determine the seventh coordinate of each point source from its fourth coordinate and the transform expression by formula two, the seventh coordinate of each point source being the coordinate of that point source in the second coordinate system:

Formula two: (x7, y7, z7) = M × (x4, y4, z4)

The fifth determining subunit is further configured to determine the coordinate-difference expression of each point source from its fifth coordinate and seventh coordinate by formula three:

Formula three: Δx = (x5, y5, z5) − (x7, y7, z7)

where (x7, y7, z7) is the seventh coordinate of each point source, (x5, y5, z5) is its fifth coordinate, (x4, y4, z4) is its fourth coordinate, M is the transform expression, and Δx is the coordinate-difference expression.
In one possible design, the multi-modal detection system is a positron emission tomography-computed tomography (PET-CT) detection system, the first image is a PET image detected by the PET detector, and the second image is a CT image detected by the CT detector; the apparatus further comprises:

a second obtaining module, configured to obtain a second transition matrix according to the first transition matrix, the second transition matrix being used to convert the coordinate of each second pixel of the second image in the second coordinate system into a coordinate in the first coordinate system;

a second determining module, configured to determine the eighth coordinate of each second pixel according to the second transition matrix and the third coordinate of each second pixel, the eighth coordinate of each second pixel being the coordinate of that second pixel in the first coordinate system;

a third determining module, configured to determine the attenuation coefficient of each first pixel from the second image according to the eighth coordinate of each second pixel and the first coordinate of each first pixel;

a correction module, configured to correct the first image according to the attenuation coefficient of each first pixel.
In the embodiments of the present invention, after the control terminal detects the first image and the second image of the target object, it obtains the first transition matrix and, according to the first transition matrix and the first coordinate of each first pixel in the first image, converts the first coordinate of each first pixel into the second coordinate of that first pixel in the second coordinate system; the control terminal then fuses the first image and the second image into a multi-modal detection image according to the second coordinate of each first pixel and the third coordinate of each second pixel of the second image. Because the first transition matrix converts the first coordinate of each first pixel of the first image into the second coordinate of that pixel in the second coordinate system, the error between the first coordinate system of the first image and the second coordinate system of the second image is reduced, which further improves the accuracy of fusing the first image with the second image.
Detailed description of the invention
Fig. 1 is a flow chart of a method of multi-modal detection system image fusion provided in an embodiment of the present invention;
Fig. 2 is a flow chart of a method of multi-modal detection system image fusion provided in an embodiment of the present invention;
Fig. 3 is a structural diagram of a coordinate registration phantom provided in an embodiment of the present invention;
Fig. 4 is a top view of a coordinate registration phantom provided in an embodiment of the present invention;
Fig. 5 is a flow chart of a method of multi-modal detection system image fusion provided in an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of an apparatus of multi-modal detection system image fusion provided in an embodiment of the present invention;
Fig. 7 is a schematic structural diagram of an apparatus of multi-modal detection system image fusion provided in an embodiment of the present invention (the general structure of the control terminal).
Specific embodiment
To make the objects, technical solutions and advantages of the present invention clearer, the embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
An embodiment of the present invention provides a method of multi-modal detection system image fusion. The method applies to any multi-modal detection system, which may be a PET-CT detection system or a SPECT-MRI (Single-Photon Emission Computed Tomography-Magnetic Resonance Imaging) detection system. In the embodiments of the present invention, the multi-modal detection system is illustrated by taking a PET-CT detection system as an example, but the multi-modal detection system is not specifically limited thereto.

A PET-CT detection system includes a nuclear imaging detector, an anatomical detector and a control terminal; the nuclear imaging detector may be a PET detector and the anatomical detector may be a CT detector. The PET detector is used to detect a PET image of the target object, the CT detector is used to detect a CT image of the target object, and the control terminal is used to synthesize the PET image and the CT image into a PET-CT image; the executing subject of the invention may be the control terminal.
It should be noted that, in the embodiments of the present invention, the control terminal of the PET-CT detection system can detect a target object by the PET detector and the CT detector to obtain a first image and a second image of the target object; the control terminal of the PET-CT detection system can also detect a coordinate registration phantom by the PET detector and the CT detector to obtain a third image and a fourth image of the coordinate registration phantom.
Referring to Fig. 1, the method comprises:

Step 101: detecting a target object by the multi-modal detection system to obtain a first image and a second image, the field-of-view (FOV) coordinate system of the first image being a first coordinate system and the FOV coordinate system of the second image being a second coordinate system.

Step 102: obtaining a first transition matrix, the first transition matrix being used to convert the coordinate of each first pixel of the first image in the first coordinate system into a coordinate in the second coordinate system.

Step 103: determining the second coordinate of each first pixel according to the first transition matrix and the first coordinate of each first pixel, the first coordinate of each first pixel being the coordinate of that first pixel in the first coordinate system and the second coordinate being its coordinate in the second coordinate system.

Step 104: fusing the first image and the second image according to the second coordinate of each first pixel and the third coordinate of each second pixel of the second image to obtain a multi-modal detection image, the third coordinate of each second pixel being the coordinate of that second pixel in the second coordinate system.
In one possible design, obtaining the first transition matrix comprises:

detecting a coordinate registration phantom by the multi-modal detection system to obtain a third image and a fourth image, the coordinate registration phantom including n point sources, each point source being an object detectable by the multi-modal detection system, for example an 18F-FDG solution mixed with medical lipiodol, or a solid point source; the FOV coordinate system of the third image is the first coordinate system, the FOV coordinate system of the fourth image is the second coordinate system, and n is an integer greater than or equal to 4;

determining the fourth coordinate of each of the n point sources in the third image, and determining the fifth coordinate of each point source in the fourth image, the fourth coordinate of each point source being the coordinate of that point source in the first coordinate system and the fifth coordinate being its coordinate in the second coordinate system;

determining the first transition matrix according to the fourth coordinate and the fifth coordinate of each point source.
In one possible design, determining the fourth coordinate of each of the n point sources in the third image comprises:

for each point source, determining the sphere in which the point source lies in the third image, and obtaining the pixel value and the sixth coordinate of each third pixel within the sphere, the sixth coordinate of each third pixel being the coordinate of that third pixel in the first coordinate system;

determining the fourth coordinate of the point source from the pixel value and the sixth coordinate of each third pixel by formula (1): (x4, y4, z4) = Σ_p [PixelValue_p · (x6, y6, z6)_p] / Σ_p PixelValue_p, where (x4, y4, z4) is the fourth coordinate of the point source, (x6, y6, z6) is the sixth coordinate of each third pixel, and PixelValue_p is the pixel value of each third pixel.
In one possible design, determining the fifth coordinate of each point source in the fourth image comprises:

for each point source, determining the sphere in which the point source lies in the fourth image, and obtaining the pixel value and the ninth coordinate of each fourth pixel within the sphere, the ninth coordinate of each fourth pixel being the coordinate of that fourth pixel in the second coordinate system;

determining the fifth coordinate of the point source from the pixel value and the ninth coordinate of each fourth pixel by formula (2): (x5, y5, z5) = Σ_q [PixelValue_q · (x9, y9, z9)_q] / Σ_q PixelValue_q, where (x5, y5, z5) is the fifth coordinate of the point source, (x9, y9, z9) is the ninth coordinate of each fourth pixel, and PixelValue_q is the pixel value of each fourth pixel.
In one possible design, determining the first transition matrix according to the fourth coordinate and the fifth coordinate of each point source comprises:

determining the coordinate-difference expression of each point source according to its fourth coordinate, its fifth coordinate and the transform expression of the first transition matrix;

determining the total coordinate-difference expression of the point sources according to the coordinate-difference expression of each point source;

determining, according to the total coordinate-difference expression, the first transition matrix that satisfies a preset condition, the preset condition being that the value of the total coordinate-difference expression is a minimum.
In a possible design, determining the coordinate difference expression of each point source according to the fourth coordinate, the fifth coordinate and the transformed representation of the first transition matrix includes:
According to the fourth coordinate of each point source and the transformed representation, determining the seventh coordinate of each point source by the following formula two, the seventh coordinate of each point source being the coordinate of that point source in the second coordinate system:
Formula two: (x7, y7, z7) = M × (x4, y4, z4)
According to the fifth coordinate and the seventh coordinate of each point source, determining the coordinate difference expression of each point source by the following formula three:
Formula three: Δx = (x5, y5, z5) - (x7, y7, z7)
Wherein, (x7, y7, z7) is the seventh coordinate of each point source, (x5, y5, z5) is the fifth coordinate of each point source, (x4, y4, z4) is the fourth coordinate of each point source, M is the transformed representation, and Δx is the coordinate difference expression.
In a possible design, the multi-modal detection system is a positron emission tomography-computed tomography (PET-CT) detection system, the first image is a PET image detected by the PET detector, and the second image is a CT image detected by the CT detector; before determining the second coordinate of each first pixel, the method further includes:
According to the first transition matrix, obtaining a second transition matrix, the second transition matrix being used to convert the coordinate of each second pixel of the second image in the second coordinate system into a coordinate in the first coordinate system;
According to the second transition matrix and the third coordinate of each second pixel, determining the eighth coordinate of each second pixel, the eighth coordinate of each second pixel being the coordinate of that second pixel in the first coordinate system;
According to the eighth coordinate of each second pixel and the first coordinate of each first pixel, determining the attenuation coefficient of each first pixel from the second image;
According to the attenuation coefficient of each first pixel, correcting the first image.
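The attenuation-lookup steps above can be sketched as follows. This is a minimal illustration, not the patent's implementation: it assumes the second-pixel coordinates have already been mapped into the first coordinate system (the eighth coordinates) by the second transition matrix, and it uses nearest-neighbour matching, which the patent does not specify.

```python
import numpy as np

def attenuation_per_first_pixel(first_coords, mapped_second_coords, mu):
    """For every first (e.g. PET) pixel, pick the attenuation coefficient of
    the nearest second (e.g. CT) pixel, whose coordinates have already been
    converted into the first coordinate system (the eighth coordinates).
    Nearest-neighbour matching is an assumption for illustration."""
    first = np.asarray(first_coords, dtype=float)
    second = np.asarray(mapped_second_coords, dtype=float)
    # Pairwise distances between first pixels and mapped second pixels.
    d = np.linalg.norm(first[:, None, :] - second[None, :, :], axis=2)
    return np.asarray(mu, dtype=float)[d.argmin(axis=1)]
```

For example, first pixels at (0,0,0) and (1,0,0) matched against mapped second pixels at (0.1,0,0) and (0.9,0,0) pick up the first and second attenuation coefficients respectively.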
In the embodiment of the present invention, after the controlling terminal detects the first image and the second image of the target object, it obtains the first transition matrix and, according to the first transition matrix and the first coordinate of each first pixel in the first image, converts the first coordinate of each first pixel into the second coordinate of that first pixel in the second coordinate system; then, according to the second coordinate of each first pixel and the third coordinate of each second pixel in the second image, the controlling terminal fuses the first image and the second image into a multi-modal detection image. Because the first transition matrix converts the first coordinate of each first pixel in the first image into the second coordinate of that pixel in the second coordinate system, the error between the first coordinate system of the first image and the second coordinate system of the second image is reduced, which further improves the accuracy of fusing the first image with the second image.
Since there are errors between the FOV coordinate system of the PET detector and the FOV coordinate system of the CT detector, the accuracy with which the controlling terminal fuses the PET image and the CT image is poor. In the embodiment of the present invention, the controlling terminal obtains a transition matrix used to convert the PET image into an image in the FOV coordinate system of the CT detector, converts the PET image by that transition matrix, and fuses the CT image with the converted PET image to obtain a PET-CT image. Since the coordinate systems of the CT image and of the converted PET image are both the FOV coordinate system of the CT detector, the error between the coordinate system of the PET image and that of the CT image is reduced, and the accuracy of image fusion is therefore improved.
Alternatively, the controlling terminal obtains a transition matrix used to convert the CT image into an image in the FOV coordinate system of the PET detector, converts the CT image by that transition matrix, and fuses the PET image with the converted CT image to obtain a PET-CT image. Since the coordinate systems of the PET image and of the converted CT image are both the FOV coordinate system of the PET detector, the error between the coordinate system of the PET image and that of the CT image is reduced; the accuracy of image fusion is therefore improved, and the precision of attenuation correction is also improved.
Referring to Fig. 2, the embodiment of the present invention obtains the transition matrix by the following steps 201-203:
Step 201: the controlling terminal detects a coordinate registration phantom by the multi-modal detection system to obtain a third image and a fourth image. The coordinate registration phantom includes n point sources, each being an object that can be detected by the multi-modal detection system, for example an 18F-FDG solution mixed with medical lipiodol or a solid-state point source. The FOV coordinate system of the third image is the first coordinate system, the FOV coordinate system of the fourth image is the second coordinate system, and n is an integer greater than or equal to 4.
In the embodiment of the present invention, the PET-CT equipment may include a PET detector and a CT detector; this step can therefore be realized in the following first or second way.
In the first implementation, this step may be:
The controlling terminal detects the coordinate registration phantom by the PET detector to obtain the third image of the phantom, and detects the coordinate registration phantom by the CT detector to obtain the fourth image of the phantom.
Here, the FOV coordinate system of the third image, i.e. the FOV coordinate system of the PET detector, serves as the first coordinate system, and the FOV coordinate system of the fourth image, i.e. the FOV coordinate system of the CT detector, serves as the second coordinate system.
In this step, when the controlling terminal detects the coordinate registration phantom by the PET detector, the PET detector has multiple detector cell pairs and can detect, by these detector cell pairs, the photon pairs emitted in three-dimensional space by the coordinate registration phantom to form the third image of the phantom; when the controlling terminal detects the coordinate registration phantom by the CT detector, the CT detector detects the phantom in real time, obtains multiple CT images of the phantom, and forms the fourth image from those CT images.
In the embodiment of the present invention, the coordinate registration phantom includes multiple point sources, and the controlling terminal detects the point sources in the phantom. A point source can be a solution labeled with an isotope and mixed with medical lipiodol, or a solid-state point source; an isotope-labeled solution can emit pairs of photons traveling in opposite directions. The isotope-labeled solution can be an 18F-FDG (2-deoxy-2-[18F]fluoro-D-glucose) solution, or any other object that can be detected by the multi-modal detection system, which the embodiment of the present invention does not specifically limit.
Therefore, the step in which the controlling terminal detects the coordinate registration phantom by the PET detector to obtain the third image of the phantom may be: the controlling terminal detects, by the PET detector, the photon pairs emitted by the coordinate registration phantom, and forms the third image from the detection data of the detected photon pairs.
The detection data may include the positions and number of the photon pairs detected by the PET detector. Since any four coplanar points cannot uniquely determine a three-dimensional space, the phantom includes at least four non-coplanar point sources, i.e. n is an integer greater than or equal to 4.
In the embodiment of the present invention, a coordinate registration phantom including six point sources is taken as an example. As shown in Fig. 3, the structure of the coordinate registration phantom can be a stepped configuration: the six point sources are evenly divided into three groups along the vertical direction, one group placed on each plate of the phantom. The positions where the point sources are placed do not exceed the FOV range of the PET-CT equipment, and the projections of the point sources in the vertical direction neither interfere with nor overlap one another.
Each point source in the coordinate registration phantom can be detected both by the PET detector and by the CT detector. If the point sources are 18F-FDG solution mixed with medical lipiodol, the third image is a three-dimensional image containing the six point sources; the fourth image composed of the multiple CT images can likewise be regarded as a three-dimensional image containing the six point sources.
In the second implementation, this step may be:
The controlling terminal detects the coordinate registration phantom by the CT detector to obtain the third image of the phantom, and detects the coordinate registration phantom by the PET detector to obtain the fourth image of the phantom. In this case, the FOV coordinate system of the third image, i.e. the FOV coordinate system of the CT detector, serves as the first coordinate system, and the FOV coordinate system of the fourth image, i.e. the FOV coordinate system of the PET detector, serves as the second coordinate system.
In the embodiment of the present invention, after the controlling terminal determines the third image and the fourth image of the coordinate registration phantom, it determines the position of each point source of the phantom in the third image and the fourth image by the following step 202.
Step 202: the controlling terminal determines, in the third image, the fourth coordinate of each of the n point sources, and determines, in the fourth image, the fifth coordinate of each point source; the fourth coordinate of each point source is its coordinate in the first coordinate system, and the fifth coordinate of each point source is its coordinate in the second coordinate system.
In this step, the position of a point source in the third image and the fourth image is determined by determining the barycentric coordinates of the point source; therefore, this step can be realized by the following steps 2021-2024.
Step 2021: for each point source, the controlling terminal determines the sphere where the point source is located in the third image, and obtains the pixel value and the sixth coordinate of each third pixel in the sphere; the sixth coordinate of each third pixel is the coordinate of that third pixel in the first coordinate system.
In the embodiment of the present invention, for the multiple point sources included in the coordinate registration phantom, the controlling terminal detects the phantom separately by the PET detector and the CT detector; the position of each point source detected by the PET detector in the PET image corresponds one-to-one with the position of that point source detected by the CT detector in the CT image.
In order to accurately determine, for each point source in the coordinate registration phantom, its fourth coordinate in the first coordinate system and the corresponding fifth coordinate in the second coordinate system, the fourth and fifth coordinates of the point sources need to be determined successively in a preset order. Therefore, in this step, the step of determining the sphere where a point source is located in the third image may be:
obtaining the order of the point source among the multiple point sources and, according to that order, determining the sphere where the point source is located in the third image.
The preset order can be set and changed as the user requires, which the embodiment of the present invention does not specifically limit.
It should be noted that the order in which the controlling terminal determines the spheres of the point sources in the third image is consistent with the order in which it determines the spheres of the point sources in the fourth image.
For example, the preset order can be the order of the multiple point sources in the coordinate registration phantom along the axial positive direction of the PET-CT equipment when the controlling terminal detects the phantom.
Fig. 3 is a structural diagram of the coordinate registration phantom. As shown in Fig. 3, taking a phantom including six point sources as an example, the coordinate registration phantom includes a first horizontal plate, a second horizontal plate, a third horizontal plate, and a first riser and a second riser for fixing the three horizontal plates; point source 1 and point source 2 are arranged on the first horizontal plate, point source 3 and point source 4 on the second horizontal plate, and point source 5 and point source 6 on the third horizontal plate.
Fig. 4 is a top view of the coordinate registration phantom; the projection of each point source in the phantom in the direction perpendicular to the ground is distributed as shown in Fig. 4.
In the embodiment of the present invention, the direction perpendicular to the plane of the detector ring of the PET-CT equipment can be defined as the axial direction. If the direction from the first horizontal plate to the third horizontal plate is defined as the axial positive direction, then in the top-view projection of the coordinate registration phantom in the vertical direction the projections of the point sources do not interfere with one another, and the point sources along the axial positive direction of the PET-CT equipment are successively: 2 → 1 → 3 → 4 → 6 → 5. Therefore, the order in which the controlling terminal determines the spheres of the point sources in the third image and in the fourth image can both be: 2 → 1 → 3 → 4 → 6 → 5.
In this step, in the third image, the sphere center is determined first, and then a sphere larger than the actual volume of the point source is determined from that center. Therefore, the step in which the controlling terminal determines the sphere where the point source is located in the third image may be: the controlling terminal obtains the pixel value of each third pixel in the third image, determines the point with the largest pixel value among the multiple third pixels, takes that point as the sphere center, determines a sphere whose volume is larger than the actual volume of the point source, and takes that sphere as the sphere where the point source is located.
When determining a sphere whose volume is larger than the actual volume of the point source, the controlling terminal can determine a slightly larger sphere by the following steps: the controlling terminal determines the maximum distance from the sphere center to the boundary of the point source, determines, according to that maximum distance, a preset distance greater than it, takes the preset distance as the radius of the sphere, and thereby determines a sphere slightly larger than the actual volume of the point source.
The preset distance can be set and changed as the user requires, which the embodiment of the present invention does not specifically limit. For example, the preset distance can be the maximum distance increased by a unit length; for instance, if the maximum distance is 3 millimeters, the preset distance can be 3.1 millimeters. It should be noted that, in the step of determining the sphere where the point source is located in the third image, the implementation of determining a sphere whose volume is larger than the actual volume of the point source can be set and changed as the user requires; the embodiment of the present invention does not specifically limit this implementation.
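The sphere determination above can be sketched as follows, assuming the third image is a NumPy volume. The 50%-of-maximum threshold used to approximate the source boundary is a hypothetical choice; the patent only requires the sphere to be slightly larger than the source.

```python
import numpy as np

def sphere_voxels(volume, margin=0.1):
    """Take the brightest voxel as the sphere centre, estimate the maximum
    distance from the centre to the source boundary, enlarge it by a preset
    margin, and return the voxel indices inside the resulting sphere."""
    centre = np.array(np.unravel_index(np.argmax(volume), volume.shape))
    boundary = np.argwhere(volume > 0.5 * volume.max())  # crude source mask
    radius = np.linalg.norm(boundary - centre, axis=1).max() + margin
    all_idx = np.argwhere(np.ones(volume.shape, dtype=bool))
    inside = np.linalg.norm(all_idx - centre, axis=1) <= radius
    return all_idx[inside], centre
```

For a source whose farthest above-threshold voxel is one voxel from the centre, the returned sphere covers the centre and its six face neighbours.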
In this step, after obtaining the pixel value of each third pixel in the sphere, the controlling terminal determines the coordinate of each third pixel in the first coordinate system and takes that coordinate as the sixth coordinate.
Step 2022: according to the pixel value and the sixth coordinate of each third pixel, the controlling terminal determines the fourth coordinate of the point source by the following formula one (1):
Formula one (1): (x4, y4, z4) = Σp [PixelValuep × (x6, y6, z6)p] / Σp PixelValuep
Wherein, (x4, y4, z4) is the fourth coordinate of the point source, (x6, y6, z6)p is the sixth coordinate of the p-th third pixel, and PixelValuep is the pixel value of the p-th third pixel.
It should be noted that the fourth coordinate of the point source is the barycentric coordinate of the sphere where the point source is located.
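The barycentric coordinate of step 2022 (and likewise of step 2024) is a pixel-value-weighted centroid; a minimal sketch in NumPy, assuming the sphere's voxel coordinates and pixel values are given as arrays:

```python
import numpy as np

def barycentric_coordinate(coords, pixel_values):
    """Weighted centroid of the voxels in the sphere: each coordinate is
    weighted by its pixel value and the sum is normalised by the total
    pixel value."""
    w = np.asarray(pixel_values, dtype=float)
    c = np.asarray(coords, dtype=float)
    return (w[:, None] * c).sum(axis=0) / w.sum()
```

For example, two voxels at (0,0,0) and (2,0,0) with pixel values 1 and 3 give the centroid (1.5, 0, 0).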
Step 2023: for each point source, the controlling terminal determines the sphere where the point source is located in the fourth image, and obtains the pixel value and the ninth coordinate of each fourth pixel in the sphere; the ninth coordinate of each fourth pixel is the coordinate of that fourth pixel in the second coordinate system.
The implementation of this step is similar to step 2021 and is not repeated here.
Step 2024: according to the pixel value and the ninth coordinate of each fourth pixel, the controlling terminal determines the fifth coordinate of the point source by the following formula one (2):
Formula one (2): (x5, y5, z5) = Σq [PixelValueq × (x9, y9, z9)q] / Σq PixelValueq
Wherein, (x5, y5, z5) is the fifth coordinate of the point source, (x9, y9, z9)q is the ninth coordinate of the q-th fourth pixel, and PixelValueq is the pixel value of the q-th fourth pixel.
The implementation of this step is similar to step 2022 and is not repeated here.
In turn, according to the fourth coordinates from the third image and the fifth coordinates from the fourth image, the controlling terminal determines the first transition matrix by the following step 203; the first transition matrix can be used to convert the fourth coordinate of the third image in the first coordinate system into the seventh coordinate of the third image in the second coordinate system.
Step 203: the controlling terminal determines the first transition matrix according to the fourth coordinate and the fifth coordinate of each point source.
If the third image is the PET image detected by the PET detector and the fourth image is the CT image detected by the CT detector, the first transition matrix is the matrix that converts the PET image into an image in the FOV coordinate system of the CT detector.
If the third image is the CT image detected by the CT detector and the fourth image is the PET image detected by the PET detector, the first transition matrix is the matrix that converts the CT image into an image in the FOV coordinate system of the PET detector.
In this step, when the coordinates of the third image in the first coordinate system are converted by the first transition matrix into coordinates in the second coordinate system, the controlling terminal determines, for each point source in the coordinate registration phantom, the difference between the converted seventh coordinate of the third image in the second coordinate system and the fifth coordinate of the fourth image in the second coordinate system, and further determines the first transition matrix that makes the total coordinate difference of the multiple point sources smallest.
Therefore, this step can be realized by the following steps 2031-2033.
Step 2031: the controlling terminal determines the coordinate difference expression of each point source according to the fourth coordinate, the fifth coordinate and the transformed representation of the first transition matrix.
In the embodiment of the present invention, for each point source in the coordinate registration phantom, the controlling terminal has pre-defined the transformed representation that converts the fourth coordinate of the third image in the first coordinate system into the seventh coordinate of the third image in the second coordinate system; then, by the transformed representation, it further determines the difference between the seventh coordinate and the corresponding fifth coordinate of the point source in the fourth image. Therefore, this step can be realized by the following steps 2031a-2031b.
Step 2031a: according to the fourth coordinate of each point source and the transformed representation, the controlling terminal determines the seventh coordinate of each point source by the following formula two; the seventh coordinate of each point source is its coordinate in the second coordinate system:
Formula two: (x7, y7, z7) = M × (x4, y4, z4)
Wherein, (x7, y7, z7) is the seventh coordinate of each point source, (x4, y4, z4) is the fourth coordinate of each point source, and M is the transformed representation.
In the embodiment of the present invention, for each point source in the coordinate registration phantom, the controlling terminal converts the fourth coordinate of the third image in the first coordinate system into the seventh coordinate of the third image in the second coordinate system according to the transformed representation in formula two.
When the first transition matrix is the matrix that converts the CT image into an image in the FOV coordinate system of the PET detector, formula two can be: (x7, y7, z7) = MCT→PET × (x4, y4, z4), where MCT→PET is the transformed representation of the first transition matrix when converting the coordinate of the CT image in the FOV coordinate system of the CT detector into its coordinate in the FOV coordinate system of the PET detector.
When the first transition matrix is the matrix that converts the PET image into an image in the FOV coordinate system of the CT detector, formula two can be: (x7, y7, z7) = MPET→CT × (x4, y4, z4), where MPET→CT is the transformed representation of the first transition matrix when converting the coordinate of the PET image in the FOV coordinate system of the PET detector into its coordinate in the FOV coordinate system of the CT detector.
In this step, the fourth coordinate of the third image in the first coordinate system can be converted to the seventh coordinate in the second coordinate system by rotation and translation.
Taking the second coordinate system as the reference coordinate system, each coordinate included in the first coordinate system is rotated about the x, y and z axes by certain angles, so that the x, y and z axes of the first coordinate system become parallel to the x, y and z axes of the reference coordinate system respectively; then each coordinate included in the first coordinate system is translated along the x, y and z directions by certain distances, so that the coordinate origin of the first coordinate system coincides with the coordinate origin of the reference coordinate system; finally, the first coordinate system and the second coordinate system coincide.
Denote the angles of rotation about the x, y and z axes as φ, θ and ψ, and the translation distances along the x, y and z directions as x0, y0 and z0. In this way, the conversion parameters that convert the fourth coordinate of the third image in the first coordinate system to the seventh coordinate in the second coordinate system are obtained: the rotation parameters φ, θ, ψ and the translation parameters x0, y0, z0. The transformed representation M that converts the fourth coordinate in the first coordinate system to the seventh coordinate in the second coordinate system can then be expressed in terms of these six conversion parameters.
Because factors such as experimental error and the resolutions of the PET detector and CT detector affect the test data, the actual experimental data cannot yield an exact solution of the multiple parameters included in the matrix. Therefore, the embodiment of the present invention cannot directly obtain the exact solution of these parameters from formula two.
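Assembling M from the six conversion parameters can be sketched as follows. The patent does not give the explicit matrix or the rotation order; the homogeneous 4x4 form and the Rz·Ry·Rx composition below are assumptions for illustration.

```python
import numpy as np

def rigid_transform(phi, theta, psi, x0, y0, z0):
    """Build a 4x4 rotation-plus-translation matrix from rotation angles
    phi/theta/psi about the x/y/z axes and translation (x0, y0, z0).
    The Rz @ Ry @ Rx composition order is an assumed convention."""
    cx, sx = np.cos(phi), np.sin(phi)
    cy, sy = np.cos(theta), np.sin(theta)
    cz, sz = np.cos(psi), np.sin(psi)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    M = np.eye(4)
    M[:3, :3] = Rz @ Ry @ Rx
    M[:3, 3] = [x0, y0, z0]
    return M
```

Applying M to a homogeneous coordinate (x, y, z, 1) rotates it and then shifts it by the translation, mirroring the rotate-then-translate description in the text.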
In the embodiment of the present invention, a self-devised algorithm is used: by the following step 2031b, the coordinate difference between the converted seventh coordinate and the fifth coordinate in the second coordinate system is expressed.
Step 2031b: according to the fifth coordinate and the seventh coordinate of each point source, the controlling terminal determines the coordinate difference expression of each point source by the following formula three:
Formula three: Δx = (x5, y5, z5) - (x7, y7, z7)
Wherein, (x7, y7, z7) is the seventh coordinate of each point source, (x5, y5, z5) is the fifth coordinate of each point source, and Δx is the coordinate difference expression.
From step 2031a, the relationship between the seventh coordinate and the fourth coordinate can be expressed by formula two: (x7, y7, z7) = M × (x4, y4, z4). Substituting formula two into formula three gives the following formula four, i.e. the coordinate difference expression of each point source can be expressed as:
Formula four: Δx = (x5, y5, z5) - M × (x4, y4, z4).
When the first transition matrix is the matrix that converts the CT image into an image in the FOV coordinate system of the PET detector, formula four can be: ΔxCT→PET = (x5, y5, z5) - MCT→PET × (x4, y4, z4), where ΔxCT→PET is the coordinate difference expression corresponding to the first transition matrix when converting the coordinate of the CT image in the FOV coordinate system of the CT detector into its coordinate in the FOV coordinate system of the PET detector.
When the first transition matrix is the matrix that converts the PET image into an image in the FOV coordinate system of the CT detector, formula four can be: ΔxPET→CT = (x5, y5, z5) - MPET→CT × (x4, y4, z4), where ΔxPET→CT is the coordinate difference expression corresponding to the first transition matrix when converting the coordinate of the PET image in the FOV coordinate system of the PET detector into its coordinate in the FOV coordinate system of the CT detector.
Step 2032: the controlling terminal determines the total coordinate difference expression of the multiple point sources according to the coordinate difference expression of each point source.
In the embodiment of the present invention, the total coordinate difference of the multiple point sources in the coordinate registration phantom is further expressed by the following formula five, i.e. the total coordinate difference expression:
Formula five: Error = Σi=1..n Σj=1..3 (Δxi,j)^2
Wherein, n indicates the n point sources included in the coordinate registration phantom, j indicates the dimension of the coordinate axis, i.e. the three dimensions of the x, y and z axes in the first coordinate system and the second coordinate system, and Error indicates the total coordinate difference expression.
Formula five thus defines the difference between the fifth coordinates and the seventh coordinates of the multiple point sources in the coordinate registration phantom.
When the first transition matrix is the matrix that converts the CT image into an image in the FOV coordinate system of the PET detector, formula five can be: ErrorCT→PET = Σi=1..n Σj=1..3 (ΔxCT→PET,i,j)^2, where ErrorCT→PET is the total coordinate difference expression corresponding to converting the coordinate of the CT image in the FOV coordinate system of the CT detector into its coordinate in the FOV coordinate system of the PET detector.
When the first transition matrix is the matrix that converts the PET image into an image in the FOV coordinate system of the CT detector, formula five can be: ErrorPET→CT = Σi=1..n Σj=1..3 (ΔxPET→CT,i,j)^2, where ErrorPET→CT is the total coordinate difference expression corresponding to converting the coordinate of the PET image in the FOV coordinate system of the PET detector into its coordinate in the FOV coordinate system of the CT detector.
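The total coordinate difference sums the per-axis differences over all n point sources; a minimal sketch, assuming the squared-difference (least-squares) form, since the original formula body is not reproduced in the text:

```python
import numpy as np

def total_coordinate_difference(fifth_coords, seventh_coords):
    """Sum, over point sources i and axes j, of the squared difference
    between the fifth coordinate and the converted seventh coordinate.
    The squared (least-squares) form is an assumption."""
    diff = (np.asarray(fifth_coords, dtype=float)
            - np.asarray(seventh_coords, dtype=float))
    return float((diff ** 2).sum())
```

A perfect transition matrix makes every seventh coordinate equal its fifth coordinate and drives the total difference to zero.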
Step 2033: according to the total coordinate difference expression, the controlling terminal determines the first transition matrix that meets the preset condition, the preset condition being that the value of the total coordinate difference expression is a minimum.
In this step, the coordinate difference expression of each point source is substituted into the total coordinate difference expression, i.e. formula four is substituted into formula five; then the fourth coordinate in the third image and the fifth coordinate in the fourth image of each point source determined in step 202 are successively substituted into formula five, and the transition matrix that minimizes the value of the total coordinate difference expression, i.e. each conversion parameter of that matrix, is determined; that transition matrix is taken as the first transition matrix.
The specific implementation process is: substitute the fourth coordinate of each point source in the third image and its fifth coordinate in the fourth image successively into formula five; within the reasonable range around the initial value of the first transition matrix required by the physical process, choose a suitable step length, successively calculate the value of the total coordinate difference expression, and choose the transition matrix corresponding to the minimum value as the optimal solution of the first transition matrix.
The initial value of the first transition matrix can be set as required, for example the identity transformation (all rotation and translation parameters zero).
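This parameter scan can be sketched as follows. The sketch is a hypothetical simplification restricted to the three translation parameters with the rotation fixed at zero; the range and step length stand in for the "reasonable range" and "suitable step length" of the text.

```python
import numpy as np

def fit_translation(fourth_coords, fifth_coords, lo=-2.0, hi=2.0, step=0.1):
    """Scan each translation parameter over [lo, hi] with a fixed step,
    starting from the zero (identity) initial value, keeping the value
    that minimises the total coordinate difference. With zero rotation
    the three axes decouple and can be scanned independently."""
    fourth = np.asarray(fourth_coords, dtype=float)
    fifth = np.asarray(fifth_coords, dtype=float)
    t = np.zeros(3)
    for j in range(3):  # x, y, z translation in turn
        candidates = np.arange(lo, hi + step, step)
        t[j] = min(candidates,
                   key=lambda c: (((fourth[:, j] + c) - fifth[:, j]) ** 2).sum())
    return t
```

Given point-source coordinate pairs offset by a fixed shift, the scan recovers that shift up to the grid resolution.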
In this way, the controlling terminal can convert, according to the first transition matrix, the fourth coordinate of the third image in the first coordinate system into the seventh coordinate of the third image in the second coordinate system.
In the embodiment of the present invention, the controlling terminal stores the numerical value of each conversion parameter of the first transition matrix, so that, in subsequent use of the controlling terminal, the first image and the second image of the PET-CT equipment can be fused according to the first transition matrix.
In a possible design of the embodiment of the present invention, since the CT image detected by the CT detector can store the attenuation coefficients of the target object, the controlling terminal can first perform attenuation correction on the PET image by these attenuation coefficients, then convert the corrected PET image into an image in the FOV coordinate system of the CT detector according to the first transition matrix, and then fuse the corrected PET image with the CT image, further improving the accuracy of the fused PET-CT image.
Therefore, when the first transition matrix is corresponding turn of image of FOV coordinate system that PET image is converted to CT detector
When changing matrix, also need to determine that the image for the FOV coordinate system that CT image is converted to pet detector is corresponding by following steps a
Transition matrix, i.e. the second transition matrix.
Step a: the controlling terminal obtains the second transition matrix from the first transition matrix; the second transition matrix converts the coordinate of each fourth pixel of the fourth image in the second coordinate system into a coordinate in the first coordinate system.
In this step, let the first transition matrix be M_PET→CT. Since the attenuation coefficients must be determined from the image data of each fourth pixel of the converted fourth image, the second transition matrix M_CT→PET — the matrix that converts the coordinate of each fourth pixel of the fourth image in the second coordinate system into a coordinate in the first coordinate system — must also be obtained.
From formula two, (x₇, y₇, z₇) = M_PET→CT × (x₄, y₄, z₄), it follows that the first transition matrix and the second transition matrix are inverses of each other; this inverse relationship can be expressed by the following formula six:
Formula six: M_CT→PET = M_PET→CT⁻¹
Therefore, the controlling terminal can determine the value of each parameter of the second transition matrix M_CT→PET directly from formula six.
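Formula six can be checked numerically. The matrix values below are hypothetical, chosen only to illustrate that the second transition matrix obtained by inversion maps a coordinate back to the first coordinate system.

```python
import numpy as np

# Hypothetical 4x4 homogeneous first transition matrix M_PET→CT:
# a small rotation about z plus a translation (illustrative values).
theta = np.deg2rad(5.0)
M_pet_to_ct = np.array([
    [np.cos(theta), -np.sin(theta), 0.0,  1.5],
    [np.sin(theta),  np.cos(theta), 0.0, -0.8],
    [0.0,            0.0,           1.0,  2.0],
    [0.0,            0.0,           0.0,  1.0],
])

# Formula six: the second transition matrix is the inverse of the first.
M_ct_to_pet = np.linalg.inv(M_pet_to_ct)

# Round-tripping a coordinate through both matrices returns it unchanged.
p = np.array([10.0, -3.0, 7.0, 1.0])
assert np.allclose(M_ct_to_pet @ (M_pet_to_ct @ p), p)
```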
In this embodiment of the present invention, the controlling terminal stores the value of each conversion parameter of the second transition matrix, so that when the PET-CT device is used later, the CT image can be converted into the FOV coordinate system of the PET detector according to the second transition matrix, and attenuation correction can be applied to the PET image according to the converted CT image.
In this embodiment of the present invention, after the controlling terminal detects the coordinate registration phantom and obtains the third and fourth images, it defines a coordinate difference expression for each of the multiple point sources from the fourth coordinates in the third image, the fifth coordinates in the fourth image, and the transformation expression; it then forms the total coordinate difference expression over the multiple point sources and, according to that expression, determines the first transition matrix that minimizes its value. Because this embodiment does not obtain the first transition matrix directly from the conversion expression and the fourth and fifth coordinates of each point source, but instead pre-defines the total coordinate difference expression of the multiple point sources, substitutes the fourth and fifth coordinates into it, and determines the first transition matrix that minimizes its value, the coordinate difference after conversion is minimized, improving the image fusion precision. In a PET/CT system this not only improves the accuracy of lesion localization, but also benefits the precision of attenuation correction of the PET image using the CT image.
An embodiment of the present invention provides a method of image fusion for a multi-modal detection system; the method fuses images using the first transition matrix obtained in the above embodiment. The method may be executed by the controlling terminal. Referring to Fig. 5, the method includes:
Step 301: the controlling terminal detects the target object through the multi-modal detection system and obtains a first image and a second image; the field-of-view (FOV) coordinate system of the first image is the first coordinate system, and the FOV coordinate system of the second image is the second coordinate system.
In this step, the multi-modal detection system may include a PET detector and a CT detector, so this step can be realized in either of the following two ways.
In the first implementation, this step may be: the controlling terminal detects the target object through the PET detector to obtain the first image of the target object, and detects the target object through the CT detector to obtain the second image of the target object.
Here the FOV coordinate system of the first image, i.e. the FOV coordinate system of the PET detector, serves as the first coordinate system, and the FOV coordinate system of the second image, i.e. the FOV coordinate system of the CT detector, serves as the second coordinate system.
It should be noted that the way the controlling terminal detects the target object through the PET detector to obtain the first image, and through the CT detector to obtain the second image, is the same as in step 201 and is not repeated here.
In the second implementation, this step may be: the controlling terminal detects the target object through the CT detector to obtain the first image of the target object, and detects the target object through the PET detector to obtain the second image of the target object.
In this case the FOV coordinate system of the first image, i.e. the FOV coordinate system of the CT detector, serves as the first coordinate system, and the FOV coordinate system of the second image, i.e. the FOV coordinate system of the PET detector, serves as the second coordinate system.
Step 302: the controlling terminal obtains the first transition matrix, which converts the coordinate of each first pixel of the first image in the first coordinate system into a coordinate in the second coordinate system.
In this embodiment of the present invention, the controlling terminal has already determined the first transition matrix through steps 201-203 and stored each of its conversion parameters; therefore, in this step, the controlling terminal obtains the first transition matrix directly.
It should be noted that the first transition matrix converts the fourth coordinates of the third image in the first coordinate system into the seventh coordinates of the third image in the second coordinate system; correspondingly, in this step, the first transition matrix converts the coordinates of the first image in the first coordinate system into the coordinates of the first image in the second coordinate system.
In a possible design of this embodiment, after the controlling terminal obtains the first transition matrix, it can directly execute the following step 303 to convert the first coordinate of each first pixel of the first image into the second coordinate of that pixel in the second coordinate system.
In a possible design of this embodiment, the first image may be a PET image detected by the controlling terminal through the PET detector, and the second image the CT image detected through the CT detector. In that case, since the CT image carries the attenuation coefficients of the target object, the controlling terminal may first apply attenuation correction to the PET image using those coefficients, then convert the PET image into the FOV coordinate system of the CT detector through the following steps 303-304, and fuse the converted PET image with the CT image.
Therefore, before executing step 303, the controlling terminal first corrects the first image through the following steps b-c.
Step b: the controlling terminal determines the eighth coordinate of each second pixel according to the second transition matrix and the third coordinate of that pixel; the eighth coordinate of each second pixel is its coordinate in the first coordinate system.
In this step, let the first transition matrix be M_PET→CT. Since the attenuation coefficients must be determined from the image data of each second pixel of the converted second image, the second transition matrix M_CT→PET must also be obtained; the second transition matrix converts the coordinate of each second pixel of the second image in the second coordinate system into a coordinate in the first coordinate system.
Since the controlling terminal has already determined the second transition matrix through step a, it obtains the second transition matrix directly, determines the conversion parameters of M_CT→PET, and uses them to convert the third coordinate of each second pixel of the second image into the eighth coordinate of that pixel in the first coordinate system.
Step c: the controlling terminal determines the attenuation coefficient of each first pixel from the second image according to the eighth coordinate of each second pixel and the first coordinate of each first pixel, and corrects the first image according to the attenuation coefficient of each first pixel.
In this embodiment of the present invention, when the controlling terminal detects the target object through the CT detector, the CT detector scans the target object with an X-ray beam; the attenuation coefficients of the X-rays, i.e. the attenuation coefficients of the first pixels, can therefore be obtained from the image data of the second image. After step b converts the third coordinate of each second pixel of the second image into the eighth coordinate of that pixel in the first coordinate system, the attenuation coefficient of the target object at the position of the corresponding second pixel in the first image — i.e. the attenuation coefficient of the first pixel in the first image — can be determined from that eighth coordinate.
Correspondingly, for each second pixel, the controlling terminal determines the attenuation coefficient at the position of the eighth coordinate from the second image according to the eighth coordinate of that second pixel.
The step in which the controlling terminal corrects the first image according to the attenuation coefficient of each first pixel may be: the controlling terminal corrects the image data of each first pixel of the first image according to its attenuation coefficient, and the corrected image data of all first pixels forms the corrected first image.
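Step c's per-pixel correction can be sketched as follows. This is a minimal illustration, assuming a Beer-Lambert-style factor exp(μ·l) as the correction rule; the patent only states that each first pixel's data is corrected by its attenuation coefficient, so this particular factor, the function name, and the path length are assumptions.

```python
import numpy as np

def correct_pet(pet_image, mu_map, path_length=1.0):
    """Apply a simple attenuation correction to a PET image.

    pet_image   : array of first-pixel image data (PET counts)
    mu_map      : attenuation coefficient of each first pixel,
                  looked up from the co-registered CT image
    path_length : assumed attenuation path length per pixel

    Each pixel is scaled by exp(mu * l), compensating for counts
    lost to attenuation along the assumed path.
    """
    return pet_image * np.exp(mu_map * path_length)
```

A zero attenuation map leaves the image unchanged; larger coefficients boost the pixel data correspondingly.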
Step 303: the controlling terminal determines the second coordinate of each first pixel according to the first transition matrix and the first coordinate of that pixel; the first coordinate of each first pixel is its coordinate in the first coordinate system, and the second coordinate is its coordinate in the second coordinate system.
In this embodiment of the present invention, a first pixel is a pixel of the first image and a second pixel is a pixel of the second image; the coordinate of a first pixel in the first coordinate system is its first coordinate.
In this step, the controlling terminal applies the conversion parameters of the first transition matrix to the first coordinate of each first pixel in the first coordinate system to obtain the second coordinate of that pixel in the second coordinate system.
Step 304: the controlling terminal fuses the first image and the second image according to the second coordinate of each first pixel and the third coordinate of each second pixel of the second image to obtain a multi-modal detection image; the third coordinate of each second pixel is its coordinate in the second coordinate system.
In this embodiment of the present invention, the second image carries, for each of its second pixels, the corresponding coordinate in the second coordinate system, i.e. the third coordinate.
This step may be: the controlling terminal determines the image data of each first pixel from the first image according to its second coordinate, determines the image data of each second pixel from the second image according to its third coordinate, and, according to the second coordinate of each first pixel and the third coordinate of each second pixel, merges the image data of the first pixels with the image data of the second pixels to obtain the multi-modal detection image, for example a PET-CT image.
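Steps 303-304 can be sketched as follows, assuming the first transition matrix is represented as a 4x4 homogeneous matrix and that a simple weighted sum stands in for the merge rule (the patent does not fix a particular fusion rule, so the blend and the function names are illustrative).

```python
import numpy as np

def transform_coords(M, coords):
    """Step 303 / formula two: map first coordinates to second
    coordinates, (x7, y7, z7) = M x (x4, y4, z4), using homogeneous
    coordinates so that M can carry both rotation and translation."""
    homo = np.hstack([coords, np.ones((len(coords), 1))])
    return (homo @ M.T)[:, :3]

def fuse(first_vals, second_vals, w=0.5):
    """Step 304: merge co-registered image data; a weighted sum is
    one simple merge rule for visualizing both modalities."""
    return w * first_vals + (1 - w) * second_vals
```

After `transform_coords`, every first pixel has a coordinate in the second coordinate system, so its image data and the corresponding second pixel's data can be merged position by position.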
In a possible design of this embodiment, if the controlling terminal has corrected the first image through steps b-c — that is, when the multi-modal detection system is a PET-CT detection system, the first image is a PET image, and the second image is a CT image — this step may be:
The controlling terminal fuses the second image with the corrected first image according to the second coordinate of each first pixel and the third coordinate of each second pixel of the second image to obtain the multi-modal detection image; the third coordinate of each second pixel is its coordinate in the second coordinate system.
The controlling terminal determines the image data of each first pixel from the corrected first image according to its second coordinate, determines the image data of each second pixel from the second image according to its third coordinate, and, according to the second coordinate of each first pixel and the third coordinate of each second pixel, merges the image data of the first and second pixels to obtain the multi-modal detection image, namely the PET-CT image.
In this embodiment of the present invention, after the controlling terminal detects the first and second images of the target object, it obtains the first transition matrix and, according to the first transition matrix and the first coordinate of each first pixel of the first image, converts the first coordinate of each first pixel into its second coordinate in the second coordinate system; the controlling terminal then fuses the first image and the second image into a multi-modal detection image according to the second coordinate of each first pixel and the third coordinate of each second pixel of the second image. Because the first transition matrix is the matrix that converts the first coordinate of each first pixel of the first image into its second coordinate in the second coordinate system, the error between the first coordinate system of the first image and the second coordinate system of the second image is reduced, further improving the accuracy of fusing the first and second images.
An embodiment of the present invention provides a device for image fusion of a multi-modal detection system. Referring to Fig. 6, the device includes:
a detecting module 401, configured to detect a target object through the multi-modal detection system and obtain a first image and a second image, where the field-of-view (FOV) coordinate system of the first image is a first coordinate system and the FOV coordinate system of the second image is a second coordinate system;
a first obtaining module 402, configured to obtain a first transition matrix, where the first transition matrix converts the coordinate of each first pixel of the first image in the first coordinate system into a coordinate in the second coordinate system;
a first determining module 403, configured to determine the second coordinate of each first pixel according to the first transition matrix and the first coordinate of each first pixel, where the first coordinate of each first pixel is its coordinate in the first coordinate system and the second coordinate of each first pixel is its coordinate in the second coordinate system;
a fusion module 404, configured to fuse the first image and the second image according to the second coordinate of each first pixel and the third coordinate of each second pixel of the second image to obtain a multi-modal detection image, where the third coordinate of each second pixel is its coordinate in the second coordinate system.
In a possible design, the first obtaining module 402 includes:
a probe unit, configured to detect a coordinate registration phantom through the multi-modal detection system and obtain a third image and a fourth image, where the coordinate registration phantom includes n point sources, a point source being an object that can be detected by the multi-modal detection system, for example a solid point source or a ¹⁸F-FDG solution mixed with medical lipiodol; the FOV coordinate system of the third image is the first coordinate system, the FOV coordinate system of the fourth image is the second coordinate system, and n is an integer greater than or equal to 4;
a determination unit, configured to determine the fourth coordinate of each of the n point sources in the third image and the fifth coordinate of each point source in the fourth image, where the fourth coordinate of each point source is its coordinate in the first coordinate system and the fifth coordinate of each point source is its coordinate in the second coordinate system, and to determine the first transition matrix according to the fourth and fifth coordinates of each point source.
In a possible design, the determination unit includes:
a first determining subunit, configured to determine, for each point source, the sphere in which the point source lies in the third image, and obtain the pixel value and the sixth coordinate of each third pixel within that sphere, where the sixth coordinate of each third pixel is its coordinate in the first coordinate system;
a second determining subunit, configured to determine the fourth coordinate of the point source from the pixel values and sixth coordinates of the third pixels through the following formula one (1):
Formula one (1): (x₄, y₄, z₄) = Σ_p PixelValue_p · (x₆, y₆, z₆)_p / Σ_p PixelValue_p
where (x₄, y₄, z₄) is the fourth coordinate of the point source, (x₆, y₆, z₆)_p is the sixth coordinate of the p-th third pixel, and PixelValue_p is the pixel value of the p-th third pixel.
In a possible design, the determination unit includes:
a third determining subunit, configured to determine, for each point source, the sphere in which the point source lies in the fourth image, and obtain the pixel value and the ninth coordinate of each fourth pixel within that sphere, where the ninth coordinate of each fourth pixel is its coordinate in the second coordinate system;
a fourth determining subunit, configured to determine the fifth coordinate of the point source from the pixel values and ninth coordinates of the fourth pixels through the following formula one (2):
Formula one (2): (x₅, y₅, z₅) = Σ_q PixelValue_q · (x₉, y₉, z₉)_q / Σ_q PixelValue_q
where (x₅, y₅, z₅) is the fifth coordinate of the point source, (x₉, y₉, z₉)_q is the ninth coordinate of the q-th fourth pixel, and PixelValue_q is the pixel value of the q-th fourth pixel.
In a possible design, the determination unit includes:
a fifth determining subunit, configured to determine the coordinate difference expression of each point source according to its fourth coordinate, its fifth coordinate, and the transformation expression of the first transition matrix;
a sixth determining subunit, configured to determine the total coordinate difference expression of the multiple point sources according to the coordinate difference expression of each point source;
a seventh determining subunit, configured to determine, according to the total coordinate difference expression, the first transition matrix satisfying a preset condition, where the preset condition is that the value of the total coordinate difference expression is a minimum.
The fifth determining subunit is further configured to determine the seventh coordinate of each point source — its coordinate in the second coordinate system — from its fourth coordinate and the transformation expression through the following formula two:
Formula two: (x₇, y₇, z₇) = M × (x₄, y₄, z₄)
The fifth determining subunit is further configured to determine the coordinate difference expression of each point source from its fifth coordinate and its seventh coordinate through the following formula three:
Formula three: Δx = (x₅, y₅, z₅) − (x₇, y₇, z₇)
where (x₇, y₇, z₇) is the seventh coordinate of each point source, (x₅, y₅, z₅) its fifth coordinate, (x₄, y₄, z₄) its fourth coordinate, M the transformation expression, and Δx the coordinate difference expression.
In a possible design, the multi-modal detection system is a positron emission tomography-computed tomography (PET-CT) detection system, the first image is the PET image detected by the PET detector, and the second image is the CT image detected by the CT detector; the device further includes:
a second obtaining module, configured to obtain a second transition matrix from the first transition matrix, where the second transition matrix converts the coordinate of each second pixel of the second image in the second coordinate system into a coordinate in the first coordinate system;
a second determining module, configured to determine the eighth coordinate of each second pixel according to the second transition matrix and the third coordinate of each second pixel, where the eighth coordinate of each second pixel is its coordinate in the first coordinate system;
a third determining module, configured to determine the attenuation coefficient of each first pixel from the second image according to the eighth coordinate of each second pixel and the first coordinate of each first pixel;
a correction module, configured to correct the first image according to the attenuation coefficient of each first pixel.
In this embodiment of the present invention, after the controlling terminal detects the first and second images of the target object, it obtains the first transition matrix and, according to the first transition matrix and the first coordinate of each first pixel of the first image, converts the first coordinate of each first pixel into its second coordinate in the second coordinate system; the controlling terminal then fuses the first image and the second image into a multi-modal detection image according to the second coordinate of each first pixel and the third coordinate of each second pixel of the second image. Because the first transition matrix is the matrix that converts the first coordinate of each first pixel of the first image into its second coordinate in the second coordinate system, the error between the first coordinate system of the first image and the second coordinate system of the second image is reduced, further improving the accuracy of fusing the first and second images.
It should be understood that, when the device for image fusion of a multi-modal detection system provided by the above embodiment performs image fusion, the division into the above functional modules is only an example; in practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the device embodiment and the method embodiment of multi-modal detection system image fusion provided above belong to the same concept; for the specific implementation process see the method embodiment, which is not repeated here.
Referring to Fig. 7, an embodiment of the present invention provides a controlling terminal 500. The processing terminal 500 is used to implement the method of multi-modal detection system image fusion provided in the above embodiments. Specifically:
The processing terminal 500 may include components such as a processor 510, a transceiver 520, a memory 530, an input unit 540, a display unit 550, an audio circuit 560, and a power supply 570, as shown in Fig. 7. Those skilled in the art will understand that the terminal structure shown in Fig. 7 does not limit the terminal; it may include more or fewer components than illustrated, combine certain components, or use a different component layout. In detail:
The processor 510 may be the control center of the terminal, connecting the various parts of the entire terminal device, such as the transceiver 520 and the memory 530, through various interfaces; by running or executing the software programs and/or modules stored in the memory 530 and calling the data stored in the memory 530, it executes the various functions of the processing terminal 500 and processes data, thereby monitoring the processing terminal 500 as a whole. Optionally, the processor 510 may include one or more processing cores. In the present invention, the processor 510 may be used for the processing related to gating signals. The transceiver 520 may be used to send and receive data; the terminal may send and receive data through the transceiver 520, for example over the Internet, and the transceiver may be a network card.
The memory 530 may be used to store software programs and modules; the processor 510 executes various function applications and data processing by running the software programs and modules stored in the memory 530. The memory 530 may mainly include a program storage area and a data storage area: the program storage area may store the operating system and the application programs required by at least one function (such as the gating-signal determination function); the data storage area may store data created according to the use of the terminal (such as the location information of annihilation points). In addition, the memory 530 may include a high-speed random access memory and may also include a non-volatile memory, for example at least one magnetic disk memory, a flash memory device, or another solid-state memory component. The input unit 540 may be used to receive input numbers or character information, and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control. The display unit 550 may be used to display information input by the user or provided to the user and the various graphical user interfaces of the terminal; these graphical user interfaces may be composed of graphics, text, icons, video, and any combination thereof. The display unit 550 may include a display panel 551; optionally, the display panel 551 may be configured in the form of an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode), or the like. The audio circuit 560, a loudspeaker 561, and a microphone 562 may provide an audio interface between the user and the terminal; the audio circuit 560 may convert received audio data into an electrical signal. The power supply 570 may be logically connected to the processor 510 through a power management system, so that charging, discharging, and power-consumption management are realized through the power management system. The power supply 570 may further include one or more DC or AC power sources, a recharging system, a power-failure detection circuit, a power adapter or inverter, a power status indicator, and other such components.
Specifically, in the embodiments of the present invention, the processing terminal 500 further includes a memory and one or more programs, where the one or more programs are stored in the memory and configured to be executed by one or more processors. The one or more programs include instructions for performing the following operations:
detecting a target object through the multi-modal detection system to obtain a first image and a second image, where the field-of-view (FOV) coordinate system of the first image is a first coordinate system and the FOV coordinate system of the second image is a second coordinate system;
obtaining a first transition matrix, where the first transition matrix converts the coordinate of each first pixel of the first image in the first coordinate system into a coordinate in the second coordinate system;
determining the second coordinate of each first pixel according to the first transition matrix and the first coordinate of each first pixel, where the first coordinate of each first pixel is its coordinate in the first coordinate system and the second coordinate of each first pixel is its coordinate in the second coordinate system;
fusing the first image and the second image according to the second coordinate of each first pixel and the third coordinate of each second pixel of the second image to obtain a multi-modal detection image, where the third coordinate of each second pixel is its coordinate in the second coordinate system.
In a possible design, obtaining the first transition matrix includes:
detecting a coordinate registration phantom through the multi-modal detection system to obtain a third image and a fourth image, where the coordinate registration phantom includes n point sources, a point source being an object that can be detected by the multi-modal detection system, for example a solid point source or a ¹⁸F-FDG solution mixed with medical lipiodol; the FOV coordinate system of the third image is the first coordinate system, the FOV coordinate system of the fourth image is the second coordinate system, and n is an integer greater than or equal to 4;
determining the fourth coordinate of each of the n point sources in the third image and the fifth coordinate of each point source in the fourth image, where the fourth coordinate of each point source is its coordinate in the first coordinate system and the fifth coordinate of each point source is its coordinate in the second coordinate system;
determining the first transition matrix according to the fourth and fifth coordinates of each point source.
In a possible design, determining the fourth coordinate of each of the n point sources in the third image includes:
for each point source, determining the sphere in which the point source lies in the third image, and obtaining the pixel value and the sixth coordinate of each third pixel within that sphere, where the sixth coordinate of each third pixel is its coordinate in the first coordinate system;
determining the fourth coordinate of the point source from the pixel values and sixth coordinates of the third pixels through the following formula one (1):
Formula one (1): (x₄, y₄, z₄) = Σ_p PixelValue_p · (x₆, y₆, z₆)_p / Σ_p PixelValue_p
where (x₄, y₄, z₄) is the fourth coordinate of the point source, (x₆, y₆, z₆)_p is the sixth coordinate of the p-th third pixel, and PixelValue_p is the pixel value of the p-th third pixel.
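Formula one can be sketched as an intensity-weighted centroid over the pixels in the sphere around a point source; the function name is illustrative, and the same routine serves for formula one (2) with ninth coordinates and fourth-pixel values.

```python
import numpy as np

def point_source_centroid(coords, pixel_values):
    """Formula one: intensity-weighted centroid of the pixels inside
    the sphere around a point source.

    coords       : (N, 3) coordinates of the pixels in the sphere
                   (sixth or ninth coordinates)
    pixel_values : (N,) pixel values used as weights

    Returns the point source's fourth (or fifth) coordinate.
    """
    w = np.asarray(pixel_values, dtype=float)
    c = np.asarray(coords, dtype=float)
    return (w[:, None] * c).sum(axis=0) / w.sum()
```

Weighting by pixel value pulls the estimate toward the brightest pixels, locating the point source with sub-pixel precision.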
In a possible design, determining the fifth coordinate of each point source in the fourth image includes:
for each point source, determining the sphere in which the point source lies in the fourth image, and obtaining the pixel value and the ninth coordinate of each fourth pixel within that sphere, where the ninth coordinate of each fourth pixel is its coordinate in the second coordinate system;
determining the fifth coordinate of the point source from the pixel values and ninth coordinates of the fourth pixels through the following formula one (2):
Formula one (2): (x₅, y₅, z₅) = Σ_q PixelValue_q · (x₉, y₉, z₉)_q / Σ_q PixelValue_q
where (x₅, y₅, z₅) is the fifth coordinate of the point source, (x₉, y₉, z₉)_q is the ninth coordinate of the q-th fourth pixel, and PixelValue_q is the pixel value of the q-th fourth pixel.
In one possible design, determining the first transition matrix according to the fourth coordinate and the fifth coordinate of each point source includes:
determining the coordinate difference expression of each point source according to the fourth coordinate of each point source, its fifth coordinate, and the transformed representation of the first transition matrix;
determining the total coordinate difference expression of the n point sources according to the coordinate difference expression of each point source;
determining, according to the total coordinate difference expression, the first transition matrix that satisfies a preset condition, the preset condition being that the value of the total coordinate difference expression is a minimum.
In one possible design, determining the coordinate difference expression of each point source according to the fourth coordinate of each point source, its fifth coordinate, and the transformed representation of the first transition matrix includes:
determining the seventh coordinate of each point source from the fourth coordinate of each point source and the transformed representation by the following formula two, where the seventh coordinate of each point source is the coordinate of that point source in the second coordinate system:
Formula two: (x7,y7,z7) = M × (x4,y4,z4)
determining the coordinate difference expression of each point source from the fifth coordinate and the seventh coordinate of each point source by the following formula three:
Formula three: Δx = (x5,y5,z5) - (x7,y7,z7)
where (x7,y7,z7) is the seventh coordinate of each point source, (x5,y5,z5) is the fifth coordinate of each point source, (x4,y4,z4) is the fourth coordinate of each point source, M is the transformed representation, and Δx is the coordinate difference expression.
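Minimizing the total coordinate difference over the n ≥ 4 point-source pairs (formulas two and three) is a linear least-squares problem. The following sketch assumes M is a 3×4 affine matrix acting on homogeneous coordinates; the patent does not fix the parameterization of M, so this is one plausible choice rather than the claimed implementation:

```python
import numpy as np

def fit_transition_matrix(fourth, fifth):
    """Least-squares M minimizing sum ||(x5,y5,z5) - M @ (x4,y4,z4,1)||^2
    over all point-source pairs (formulas two and three)."""
    fourth = np.asarray(fourth, dtype=float)            # (n, 3) coords, first system
    fifth = np.asarray(fifth, dtype=float)              # (n, 3) coords, second system
    A = np.hstack([fourth, np.ones((len(fourth), 1))])  # homogeneous coords, (n, 4)
    sol, *_ = np.linalg.lstsq(A, fifth, rcond=None)     # (4, 3) solution
    return sol.T                                        # 3x4 affine matrix M

# With point sources related by a pure translation of (1, 2, 3),
# the fit recovers identity rotation plus that translation.
src = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1.0]])
M = fit_transition_matrix(src, src + [1, 2, 3])
```

Four non-coplanar point sources are the minimum needed to determine all twelve affine parameters, which matches the claim's requirement that n be at least 4.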
In one possible design, the multi-modal detection system is a positron emission tomography-computed tomography (PET-CT) detection system, the first image is the PET image detected by the PET detector, and the second image is the CT image detected by the CT detector. Before determining the second coordinate of each first pixel according to the first transition matrix and the first coordinate of each first pixel, the method further includes:
obtaining a second transition matrix according to the first transition matrix, the second transition matrix being used to convert the coordinate of each second pixel of the second image in the second coordinate system into a coordinate in the first coordinate system;
determining the eighth coordinate of each second pixel according to the second transition matrix and the third coordinate of each second pixel, where the eighth coordinate of each second pixel is the coordinate of that second pixel in the first coordinate system;
determining the attenuation coefficient of each first pixel from the second image according to the eighth coordinate of each second pixel and the first coordinate of each first pixel;
correcting the first image according to the attenuation coefficient of each first pixel.
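For the PET-CT design, the second transition matrix can be obtained by inverting the first, and the attenuation coefficient of each PET pixel looked up from the nearest mapped CT pixel. A rough sketch; the affine inversion and the nearest-neighbour lookup are illustrative assumptions, not the patent's prescribed method:

```python
import numpy as np

def invert_affine(M):
    """Second transition matrix from the first: invert the 3x4 affine
    [R | t] mapping first-system coords to second-system coords."""
    R, t = M[:, :3], M[:, 3]
    R_inv = np.linalg.inv(R)
    return np.hstack([R_inv, (-R_inv @ t)[:, None]])

def attenuation_for_pet_pixels(pet_coords, ct_coords_first, ct_mu):
    """Nearest-neighbour lookup: for each PET pixel (first coordinate),
    take the attenuation coefficient of the CT pixel whose eighth
    coordinate (CT position mapped into the first system) is closest."""
    pet_coords = np.asarray(pet_coords, dtype=float)
    ct_coords_first = np.asarray(ct_coords_first, dtype=float)
    ct_mu = np.asarray(ct_mu, dtype=float)
    d = np.linalg.norm(pet_coords[:, None, :] - ct_coords_first[None, :, :], axis=2)
    return ct_mu[d.argmin(axis=1)]
```

A real system would interpolate the CT attenuation map rather than take the nearest pixel, but the coordinate bookkeeping is the same.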
In the embodiment of the present invention, after the control terminal detects the first image and the second image of the target object, it obtains the first transition matrix and, according to the first transition matrix and the first coordinate of each first pixel in the first image, converts the first coordinate of each first pixel into the second coordinate of that pixel in the second coordinate system. The control terminal then fuses the first image and the second image into a multi-modal detection image according to the second coordinate of each first pixel and the third coordinate of each second pixel of the second image. Because the first transition matrix converts the first coordinate of each first pixel in the first image into the second coordinate of that pixel in the second coordinate system, the error between the first coordinate system of the first image and the second coordinate system of the second image is reduced, which further improves the accuracy of fusing the first image with the second image.
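Putting the embodiment's steps together (apply the first transition matrix to each first-image pixel, then blend with the second image at the resulting second-system coordinate), a toy sketch in which the rounding-based resampling and the alpha blend are illustrative assumptions:

```python
import numpy as np

def fuse(first_vals, second_img, first_coords, M, alpha=0.5):
    """Map each first-image pixel's first coordinate into the second
    coordinate system (second coordinate = M @ [x, y, z, 1]) and blend
    its value with the second image at the nearest voxel."""
    fused = second_img.astype(float).copy()
    ones = np.ones((len(first_coords), 1))
    second_coords = np.hstack([first_coords, ones]) @ M.T  # (n, 3)
    idx = np.rint(second_coords).astype(int)               # nearest voxel
    for (i, j, k), v in zip(idx, np.ravel(first_vals)):
        if all(0 <= a < s for a, s in zip((i, j, k), fused.shape)):
            fused[i, j, k] = (1 - alpha) * fused[i, j, k] + alpha * v
    return fused

# Identity transition matrix: one PET value lands on the matching CT voxel.
second = np.zeros((2, 2, 2))
M_id = np.hstack([np.eye(3), np.zeros((3, 1))])
f = fuse(np.array([10.0]), second, np.array([[0.0, 0.0, 0.0]]), M_id)
```

In practice the fused overlay would use proper interpolation and color mapping, but the coordinate conversion step is exactly the role the first transition matrix plays in the claims.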
Those of ordinary skill in the art will appreciate that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium, and the storage medium may be a read-only memory, a magnetic disk, an optical disc, or the like.
The foregoing are merely preferred embodiments of the present invention and are not intended to limit the invention. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.
Claims (10)
1. A method of multi-modal detection system image fusion, characterized in that the method includes:
detecting a target object by a multi-modal detection system to obtain a first image and a second image, where the field-of-view (FOV) coordinate system of the first image is a first coordinate system and the FOV coordinate system of the second image is a second coordinate system;
obtaining a first transition matrix, the first transition matrix being used to convert the coordinate of each first pixel of the first image in the first coordinate system into a coordinate in the second coordinate system;
determining the second coordinate of each first pixel according to the first transition matrix and the first coordinate of each first pixel, where the first coordinate of each first pixel is the coordinate of that first pixel in the first coordinate system, and the second coordinate of each first pixel is the coordinate of that first pixel in the second coordinate system;
fusing the first image and the second image according to the second coordinate of each first pixel and the third coordinate of each second pixel of the second image to obtain a multi-modal detection image, where the third coordinate of each second pixel is the coordinate of that second pixel in the second coordinate system;
where obtaining the first transition matrix includes:
detecting a coordinate registration phantom by the multi-modal detection system to obtain a third image and a fourth image, where the coordinate registration phantom includes n point sources, each of the n point sources being an object detectable by the multi-modal detection system, such as an 18F-FDG solution mixed with medical lipiodol or a solid-state point source; the FOV coordinate system of the third image is the first coordinate system, the FOV coordinate system of the fourth image is the second coordinate system, and n is an integer greater than or equal to 4;
determining the fourth coordinate of each of the n point sources in the third image, and determining the fifth coordinate of each point source in the fourth image, where the fourth coordinate of each point source is the coordinate of that point source in the first coordinate system, and the fifth coordinate of each point source is the coordinate of that point source in the second coordinate system;
determining the first transition matrix according to the fourth coordinate and the fifth coordinate of each point source;
where determining the fourth coordinate of each of the n point sources in the third image includes:
for each point source, determining the sphere in which the point source is located in the third image, and obtaining the pixel value and the sixth coordinate of each third pixel in the sphere, where the sixth coordinate of each third pixel is the coordinate of that third pixel in the first coordinate system;
determining the fourth coordinate of the point source from the pixel value and the sixth coordinate of each third pixel by the following formula one (1):
Formula one (1): (x4,y4,z4) = Σp [PixelValuep × (x6,y6,z6)] / Σp PixelValuep
where (x4,y4,z4) is the fourth coordinate of the point source, (x6,y6,z6) is the sixth coordinate of each third pixel, and PixelValuep is the pixel value of each third pixel.
2. The method according to claim 1, characterized in that determining the fifth coordinate of each point source in the fourth image includes:
for each point source, determining the sphere in which the point source is located in the fourth image, and obtaining the pixel value and the ninth coordinate of each fourth pixel in the sphere, where the ninth coordinate of each fourth pixel is the coordinate of that fourth pixel in the second coordinate system;
determining the fifth coordinate of the point source from the pixel value and the ninth coordinate of each fourth pixel by the following formula one (2):
Formula one (2): (x5,y5,z5) = Σq [PixelValueq × (x9,y9,z9)] / Σq PixelValueq
where (x5,y5,z5) is the fifth coordinate of the point source, (x9,y9,z9) is the ninth coordinate of each fourth pixel, and PixelValueq is the pixel value of each fourth pixel.
3. The method according to claim 1, characterized in that determining the first transition matrix according to the fourth coordinate and the fifth coordinate of each point source includes:
determining the coordinate difference expression of each point source according to the fourth coordinate of each point source, its fifth coordinate, and the transformed representation of the first transition matrix;
determining the total coordinate difference expression of the n point sources according to the coordinate difference expression of each point source;
determining, according to the total coordinate difference expression, the first transition matrix that satisfies a preset condition, the preset condition being that the value of the total coordinate difference expression is a minimum.
4. The method according to claim 3, characterized in that determining the coordinate difference expression of each point source according to the fourth coordinate of each point source, its fifth coordinate, and the transformed representation of the first transition matrix includes:
determining the seventh coordinate of each point source from the fourth coordinate of each point source and the transformed representation by the following formula two, where the seventh coordinate of each point source is the coordinate of that point source in the second coordinate system:
Formula two: (x7,y7,z7) = M × (x4,y4,z4)
determining the coordinate difference expression of each point source from the fifth coordinate and the seventh coordinate of each point source by the following formula three:
Formula three: Δx = (x5,y5,z5) - (x7,y7,z7)
where (x7,y7,z7) is the seventh coordinate of each point source, (x5,y5,z5) is the fifth coordinate of each point source, (x4,y4,z4) is the fourth coordinate of each point source, M is the transformed representation, and Δx is the coordinate difference expression.
5. The method according to claim 1, characterized in that the multi-modal detection system is a positron emission tomography-computed tomography (PET-CT) detection system, the first image is the PET image detected by the PET detector, and the second image is the CT image detected by the CT detector; before determining the second coordinate of each first pixel according to the first transition matrix and the first coordinate of each first pixel, the method further includes:
obtaining a second transition matrix according to the first transition matrix, the second transition matrix being used to convert the coordinate of each second pixel of the second image in the second coordinate system into a coordinate in the first coordinate system;
determining the eighth coordinate of each second pixel according to the second transition matrix and the third coordinate of each second pixel, where the eighth coordinate of each second pixel is the coordinate of that second pixel in the first coordinate system;
determining the attenuation coefficient of each first pixel from the second image according to the eighth coordinate of each second pixel and the first coordinate of each first pixel;
correcting the first image according to the attenuation coefficient of each first pixel.
6. An apparatus for multi-modal detection system image fusion, characterized in that the apparatus includes:
a detecting module, configured to detect a target object by a multi-modal detection system to obtain a first image and a second image, where the field-of-view (FOV) coordinate system of the first image is a first coordinate system and the FOV coordinate system of the second image is a second coordinate system;
a first obtaining module, configured to obtain a first transition matrix, the first transition matrix being used to convert the coordinate of each first pixel of the first image in the first coordinate system into a coordinate in the second coordinate system;
a first determining module, configured to determine the second coordinate of each first pixel according to the first transition matrix and the first coordinate of each first pixel, where the first coordinate of each first pixel is the coordinate of that first pixel in the first coordinate system, and the second coordinate of each first pixel is the coordinate of that first pixel in the second coordinate system;
a fusion module, configured to fuse the first image and the second image according to the second coordinate of each first pixel and the third coordinate of each second pixel of the second image to obtain a multi-modal detection image, where the third coordinate of each second pixel is the coordinate of that second pixel in the second coordinate system;
where the first obtaining module includes:
a probe unit, configured to detect a coordinate registration phantom by the multi-modal detection system to obtain a third image and a fourth image, where the coordinate registration phantom includes n point sources, each of the n point sources being an object detectable by the multi-modal detection system, such as an 18F-FDG solution mixed with medical lipiodol or a solid-state point source; the FOV coordinate system of the third image is the first coordinate system, the FOV coordinate system of the fourth image is the second coordinate system, and n is an integer greater than or equal to 4;
a determination unit, configured to determine the fourth coordinate of each of the n point sources in the third image and the fifth coordinate of each point source in the fourth image, where the fourth coordinate of each point source is the coordinate of that point source in the first coordinate system and the fifth coordinate of each point source is the coordinate of that point source in the second coordinate system, and to determine the first transition matrix according to the fourth coordinate and the fifth coordinate of each point source;
where the determination unit includes:
a first determining subunit, configured to determine, for each point source, the sphere in which the point source is located in the third image, and to obtain the pixel value and the sixth coordinate of each third pixel in the sphere, where the sixth coordinate of each third pixel is the coordinate of that third pixel in the first coordinate system;
a second determining subunit, configured to determine the fourth coordinate of the point source from the pixel value and the sixth coordinate of each third pixel by the following formula one (1):
Formula one (1): (x4,y4,z4) = Σp [PixelValuep × (x6,y6,z6)] / Σp PixelValuep
where (x4,y4,z4) is the fourth coordinate of the point source, (x6,y6,z6) is the sixth coordinate of each third pixel, and PixelValuep is the pixel value of each third pixel.
7. The apparatus according to claim 6, characterized in that the determination unit includes:
a third determining subunit, configured to determine, for each point source, the sphere in which the point source is located in the fourth image, and to obtain the pixel value and the ninth coordinate of each fourth pixel in the sphere, where the ninth coordinate of each fourth pixel is the coordinate of that fourth pixel in the second coordinate system;
a fourth determining subunit, configured to determine the fifth coordinate of the point source from the pixel value and the ninth coordinate of each fourth pixel by the following formula one (2):
Formula one (2): (x5,y5,z5) = Σq [PixelValueq × (x9,y9,z9)] / Σq PixelValueq
where (x5,y5,z5) is the fifth coordinate of the point source, (x9,y9,z9) is the ninth coordinate of each fourth pixel, and PixelValueq is the pixel value of each fourth pixel.
8. The apparatus according to claim 6, characterized in that the determination unit includes:
a fifth determining subunit, configured to determine the coordinate difference expression of each point source according to the fourth coordinate of each point source, its fifth coordinate, and the transformed representation of the first transition matrix;
a sixth determining subunit, configured to determine the total coordinate difference expression of the n point sources according to the coordinate difference expression of each point source;
a seventh determining subunit, configured to determine, according to the total coordinate difference expression, the first transition matrix that satisfies a preset condition, the preset condition being that the value of the total coordinate difference expression is a minimum.
9. The apparatus according to claim 8, characterized in that the fifth determining subunit is further configured to determine the seventh coordinate of each point source from the fourth coordinate of each point source and the transformed representation by the following formula two, where the seventh coordinate of each point source is the coordinate of that point source in the second coordinate system:
Formula two: (x7,y7,z7) = M × (x4,y4,z4)
and the fifth determining subunit is further configured to determine the coordinate difference expression of each point source from the fifth coordinate and the seventh coordinate of each point source by the following formula three:
Formula three: Δx = (x5,y5,z5) - (x7,y7,z7)
where (x7,y7,z7) is the seventh coordinate of each point source, (x5,y5,z5) is the fifth coordinate of each point source, (x4,y4,z4) is the fourth coordinate of each point source, M is the transformed representation, and Δx is the coordinate difference expression.
10. The apparatus according to claim 6, characterized in that the multi-modal detection system is a positron emission tomography-computed tomography (PET-CT) detection system, the first image is the PET image detected by the PET detector, and the second image is the CT image detected by the CT detector; the apparatus further includes:
a second obtaining module, configured to obtain a second transition matrix according to the first transition matrix, the second transition matrix being used to convert the coordinate of each second pixel of the second image in the second coordinate system into a coordinate in the first coordinate system;
a second determining module, configured to determine the eighth coordinate of each second pixel according to the second transition matrix and the third coordinate of each second pixel, where the eighth coordinate of each second pixel is the coordinate of that second pixel in the first coordinate system;
a third determining module, configured to determine the attenuation coefficient of each first pixel from the second image according to the eighth coordinate of each second pixel and the first coordinate of each first pixel;
a correction module, configured to correct the first image according to the attenuation coefficient of each first pixel.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611239157.5A CN106889999B (en) | 2016-12-28 | 2016-12-28 | The method and apparatus of multi-modal detection system image co-registration |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106889999A CN106889999A (en) | 2017-06-27 |
CN106889999B true CN106889999B (en) | 2019-12-03 |
Family
ID=59198885
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201611239157.5A Active CN106889999B (en) | 2016-12-28 | 2016-12-28 | The method and apparatus of multi-modal detection system image co-registration |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106889999B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109712133B (en) * | 2018-12-28 | 2021-04-20 | 上海联影医疗科技股份有限公司 | Focal localization method, device and magnetic resonance spectroscopy analysis system |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2001095109A2 (en) * | 2000-06-02 | 2001-12-13 | Koninklijke Philips Electronics N.V. | Method and apparatus for merging images |
EP1772745B1 (en) * | 2005-10-06 | 2008-08-27 | MedCom Gesellschaft für medizinische Bildverarbeitung mbH | Registering 2D ultrasound image data and 3D image data of an object |
CN102853793B (en) * | 2012-09-27 | 2015-03-25 | 中国科学院高能物理研究所 | Coordinate transformation data processing method and coordinate transformation data processing device |
CN104545964A (en) * | 2013-10-29 | 2015-04-29 | 北京大基康明医疗设备有限公司 | Image correcting method and system |
CN104665857B (en) * | 2013-11-28 | 2019-01-11 | 上海联影医疗科技有限公司 | Multi-mode imaging system method for registering |
CN104840212B (en) * | 2014-02-14 | 2017-10-27 | 上海联影医疗科技有限公司 | The registering test equipment and method of multi-mode imaging system |
2016-12-28: application CN201611239157.5A filed (CN); granted as CN106889999B, status Active.
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||