CN117095137B - Three-dimensional imaging method and system of medical image based on two-way image acquisition - Google Patents
- Publication number
- CN117095137B CN117095137B CN202311365240.7A CN202311365240A CN117095137B CN 117095137 B CN117095137 B CN 117095137B CN 202311365240 A CN202311365240 A CN 202311365240A CN 117095137 B CN117095137 B CN 117095137B
- Authority
- CN
- China
- Prior art keywords
- image
- image data
- dimensional
- target
- depth
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06T17/00 — Three dimensional [3D] modelling, e.g. data description of 3D objects (G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL)
- G06T2200/08 — Indexing scheme for image data processing or generation, in general, involving all processing steps from image acquisition to 3D model generation
- G06T2210/41 — Indexing scheme for image generation or computer graphics: Medical
- Y02T10/40 — Engine management systems (Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION; Y02T10/10 — Internal combustion engine [ICE] based vehicles)
Abstract
The invention relates to the field of image processing, and discloses a three-dimensional imaging method and system for medical images based on two-way image acquisition, which are used to improve the clarity and accuracy of three-dimensional imaging of medical images acquired over two image paths. The method comprises the following steps: collecting first medical image data and second medical image data of a target detection object; calculating a first focused image sequence and a second focused image sequence; calculating a first optimal refocusing coefficient and a second optimal refocusing coefficient through a sharpness evaluation function; constructing a first depth calibration curve and a second depth calibration curve; determining first spatial position information and second spatial position information; constructing a first target three-dimensional tensor and a second target three-dimensional tensor; and fusing the first target three-dimensional tensor and the second target three-dimensional tensor to obtain a fused three-dimensional tensor, and performing three-dimensional imaging on the fused three-dimensional tensor to obtain a target three-dimensional imaging model of the target detection object.
Description
Technical Field
The invention relates to the field of image processing, and in particular to a three-dimensional imaging method and system for medical images based on two-way image acquisition.
Background
With the continuous development of medical imaging technology, three-dimensional imaging is becoming increasingly important in fields such as medical diagnosis, surgical planning, and treatment monitoring. However, conventional medical imaging techniques such as CT (computed tomography) and MRI (magnetic resonance imaging) face limitations in the imaging process, such as radiation dose, cost, and acquisition time.
Conventional three-dimensional imaging methods often require complex scanning equipment or elaborate image processing steps, and in some cases cannot provide sufficient information, especially where detailed knowledge of organ structure and abnormalities is required; that is, the accuracy of existing solutions is low.
Disclosure of Invention
The invention provides a three-dimensional imaging method and system for medical images based on two-way image acquisition, which are used to improve the clarity and accuracy of three-dimensional imaging of medical images acquired over two image paths.
The first aspect of the invention provides a three-dimensional imaging method for medical images based on two-way image acquisition, which comprises the following steps:
acquiring two paths of medical image data of a target detection object through two imaging devices at different viewing angles, respectively, to obtain first medical image data and second medical image data;
respectively calculating a first focused image sequence of the first medical image data and a second focused image sequence of the second medical image data through a preset digital refocusing algorithm;
calculating a first optimal refocusing coefficient of the first focused image sequence and a second optimal refocusing coefficient of the second focused image sequence through a preset sharpness evaluation function;
constructing a first depth calibration curve according to the first optimal refocusing coefficient, and constructing a second depth calibration curve according to the second optimal refocusing coefficient;
determining first spatial position information of a target detection object in the first medical image data according to the first depth calibration curve, and determining second spatial position information of the target detection object in the second medical image data according to the second depth calibration curve;
constructing a first target three-dimensional tensor according to the first spatial position information and the first focused image sequence, and constructing a second target three-dimensional tensor according to the second spatial position information and the second focused image sequence;
and fusing the first target three-dimensional tensor and the second target three-dimensional tensor to obtain a fused three-dimensional tensor, and performing three-dimensional imaging on the fused three-dimensional tensor to obtain a target three-dimensional imaging model of the target detection object.
With reference to the first aspect, in a first implementation manner of the first aspect of the present invention, the calculating, by a preset digital refocusing algorithm, a first focused image sequence of the first medical image data and a second focused image sequence of the second medical image data respectively includes:
performing gray level image conversion on the first medical image data and the second medical image data to obtain first gray level image data and second gray level image data;
performing image correction on the first gray scale image data and the second gray scale image data to obtain first corrected image data and second corrected image data;
downsampling the first corrected image data and the second corrected image data multiple times to generate a corresponding first image pyramid and a corresponding second image pyramid;
taking the first image pyramid as first main image data, calculating the displacement between the second image pyramid and the first main image data to obtain first displacement data, and finding an image corresponding to a depth position in the first image pyramid by adopting a pixel interpolation method according to the first displacement data to obtain a first initial image sequence;
performing depth blur reconstruction and result integration on the first initial image sequence to obtain a first focused image sequence, wherein the first focused image sequence comprises a plurality of first refocused images;
taking the second image pyramid as second main image data, calculating displacement between the first image pyramid and the second main image data to obtain second displacement data, and finding an image corresponding to a depth position in the second image pyramid by adopting a pixel interpolation method according to the second displacement data to obtain a second initial image sequence;
and carrying out depth blur reconstruction and result integration on the second initial image sequence to obtain a second focused image sequence, wherein the second focused image sequence comprises a plurality of second refocused images.
With reference to the first aspect, in a second implementation manner of the first aspect of the present invention, the calculating, by a preset sharpness evaluation function, a first optimal refocusing coefficient of the first focused image sequence and a second optimal refocusing coefficient of the second focused image sequence includes:
respectively calculating gradient images of a plurality of first refocused images in the first focused image sequence by adopting a second-order gradient operator to obtain a first gradient image of each first refocused image, and respectively calculating gradient images of a plurality of second refocused images in the second focused image sequence to obtain a second gradient image of each second refocused image;
calculating a first sharpness evaluation value corresponding to each first gradient image through a preset sharpness evaluation function, and calculating a second sharpness evaluation value corresponding to each second gradient image through the sharpness evaluation function;
and calculating a first optimal refocusing coefficient of the first focused image sequence according to the first sharpness evaluation value, and calculating a second optimal refocusing coefficient of the second focused image sequence according to the second sharpness evaluation value.
With reference to the first aspect, in a third implementation manner of the first aspect of the present invention, the constructing a first depth calibration curve according to the first optimal refocusing coefficient and constructing a second depth calibration curve according to the second optimal refocusing coefficient includes:
acquiring a first calibration point of each first gradient image in the first focused image sequence, and acquiring a second calibration point of each second gradient image in the second focused image sequence;
determining a first actual depth position of the first calibration point and a second actual depth position of the second calibration point;
constructing a first depth calibration data set according to the first actual depth position and the first optimal refocusing coefficient, and constructing a second depth calibration data set according to the second actual depth position and the second optimal refocusing coefficient;
and performing curve fitting on the first depth calibration data set to obtain a first depth calibration curve, and performing curve fitting on the second depth calibration data set to obtain a second depth calibration curve.
With reference to the first aspect, in a fourth implementation manner of the first aspect of the present invention, the determining, according to the first depth calibration curve, first spatial position information of the target detection object in the first medical image data, and determining, according to the second depth calibration curve, second spatial position information of the target detection object in the second medical image data includes:
performing inverse fitting on the first depth calibration curve, and performing numerical solution through a preset first inverse function to obtain first spatial position information of a target detection object in the first medical image data;
and performing inverse fitting on the second depth calibration curve, and performing numerical solution through a preset second inverse function to obtain second spatial position information of the target detection object in the second medical image data.
With reference to the first aspect, in a fifth implementation manner of the first aspect of the present invention, the constructing a first target three-dimensional tensor according to the first spatial position information and the first focused image sequence, and constructing a second target three-dimensional tensor according to the second spatial position information and the second focused image sequence includes:
determining a first slicing order of the first focused image sequence according to the first spatial position information, and determining a second slicing order of the second focused image sequence according to the second spatial position information;
constructing a first initial three-dimensional tensor of the first focused image sequence according to the first slicing order, and constructing a second initial three-dimensional tensor of the second focused image sequence according to the second slicing order;
and respectively inputting the first initial three-dimensional tensor and the second initial three-dimensional tensor into a preset convolutional long short-term memory network, extracting the spatiotemporal features of the target detection object through the convolutional long short-term memory network, and generating a corresponding first target three-dimensional tensor and a corresponding second target three-dimensional tensor.
With reference to the first aspect, in a sixth implementation manner of the first aspect of the present invention, the fusing the first target three-dimensional tensor and the second target three-dimensional tensor to obtain a fused three-dimensional tensor, and performing three-dimensional imaging on the fused three-dimensional tensor to obtain a target three-dimensional imaging model of the target detection object includes:
fusing the first target three-dimensional tensor and the second target three-dimensional tensor to obtain a fused three-dimensional tensor;
giving voxel values to the fused three-dimensional tensor to obtain three-dimensional voxel data;
and carrying out voxel rendering on the three-dimensional voxel data to obtain a target three-dimensional imaging model of the target detection object.
The second aspect of the present invention provides a three-dimensional imaging system for medical images based on two-way image acquisition, the system comprising:
the acquisition module is used for acquiring two paths of medical image data of the target detection object through two imaging devices at different viewing angles, respectively, to obtain first medical image data and second medical image data;
the computing module is used for respectively computing a first focused image sequence of the first medical image data and a second focused image sequence of the second medical image data through a preset digital refocusing algorithm;
the processing module is used for calculating a first optimal refocusing coefficient of the first focused image sequence and a second optimal refocusing coefficient of the second focused image sequence through a preset sharpness evaluation function;
the calibration module is used for constructing a first depth calibration curve according to the first optimal refocusing coefficient and constructing a second depth calibration curve according to the second optimal refocusing coefficient;
the analysis module is used for determining first spatial position information of the target detection object in the first medical image data according to the first depth calibration curve, and determining second spatial position information of the target detection object in the second medical image data according to the second depth calibration curve;
the construction module is used for constructing a first target three-dimensional tensor according to the first spatial position information and the first focused image sequence, and constructing a second target three-dimensional tensor according to the second spatial position information and the second focused image sequence;
the imaging module is used for fusing the first target three-dimensional tensor and the second target three-dimensional tensor to obtain a fused three-dimensional tensor, and performing three-dimensional imaging on the fused three-dimensional tensor to obtain a target three-dimensional imaging model of the target detection object.
A third aspect of the present invention provides a three-dimensional imaging device for medical images based on two-way image acquisition, comprising: a memory and at least one processor, the memory having instructions stored therein; the at least one processor invokes the instructions in the memory to cause the device to perform the three-dimensional imaging method for medical images based on two-way image acquisition described above.
A fourth aspect of the present invention provides a computer-readable storage medium having instructions stored therein which, when run on a computer, cause the computer to perform the three-dimensional imaging method for medical images based on two-way image acquisition described above.
In the technical scheme provided by the invention, first medical image data and second medical image data of a target detection object are acquired; a first focused image sequence and a second focused image sequence are calculated; a first optimal refocusing coefficient and a second optimal refocusing coefficient are calculated through a sharpness evaluation function; a first depth calibration curve and a second depth calibration curve are constructed; first spatial position information and second spatial position information are determined; and a first target three-dimensional tensor and a second target three-dimensional tensor are constructed. The scheme can provide more comprehensive and accurate structural information about the target object: through multi-view image acquisition, the anatomical structure of the target object can be presented at different angles and depths, so that its morphology and characteristics are understood more comprehensively. Beyond static three-dimensional imaging, a spatiotemporal correspondence can be established by introducing a convolutional long short-term memory network, realizing dynamic three-dimensional imaging and thereby improving the clarity and accuracy of three-dimensional imaging of medical images acquired over two image paths.
Drawings
FIG. 1 is a schematic diagram of one embodiment of a three-dimensional imaging method for medical images based on two-way image acquisition in an embodiment of the present invention;
FIG. 2 is a flowchart of calculating a first optimal refocusing coefficient and a second optimal refocusing coefficient according to an embodiment of the present invention;
FIG. 3 is a flow chart of constructing a first depth calibration curve and a second depth calibration curve according to an embodiment of the present invention;
FIG. 4 is a flowchart of determining first spatial location information and second spatial location information according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of one embodiment of a three-dimensional imaging system for medical images based on two-way image acquisition in an embodiment of the present invention;
FIG. 6 is a schematic diagram of an embodiment of a three-dimensional imaging device for medical images based on two-way image acquisition in an embodiment of the present invention.
Detailed Description
The embodiment of the invention provides a three-dimensional imaging method and system for medical images based on two-way image acquisition, which are used to improve the clarity and accuracy of three-dimensional imaging of medical images acquired over two image paths. The terms "first," "second," "third," "fourth," and the like in the description, the claims, and the above drawings, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate, such that the embodiments described herein may be implemented in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed or inherent to such process, method, article, or apparatus.
For ease of understanding, a specific flow of an embodiment of the present invention is described below with reference to FIG. 1. An embodiment of the three-dimensional imaging method for medical images based on two-way image acquisition in the embodiment of the present invention includes:
S101, acquiring two paths of medical image data of a target detection object through two imaging devices at different viewing angles, respectively, to obtain first medical image data and second medical image data;
it will be appreciated that the execution body of the present invention may be a three-dimensional imaging system for medical images based on two-way image acquisition, and may also be a terminal or a server, which is not limited herein. The embodiment of the present invention is described by taking a server as the execution body as an example.
Specifically, the server selects two imaging devices with different perspectives. These devices may be medical imaging devices such as CT scanners, MRI machines, or X-ray machines; the specific choice depends on the target object to be imaged and the application scenario. For example, for three-dimensional imaging of the brain, two MRI machines with different perspectives could be selected. The two imaging devices need to be properly mounted at different positions to ensure that they can capture different perspectives of the same target object at the same time. Positioning the devices requires some space planning and calibration work to ensure that their viewing angles do not overlap or become misaligned. To obtain accurate medical image data, the server ensures that the data acquisition of the two devices is synchronized. This may be achieved using time synchronization or a trigger signal: for example, dedicated hardware or software may be used to ensure that the two devices begin data collection at the same moment, avoiding mismatched or unsynchronized data. Before starting data acquisition, the target detection object must be precisely located, which may be achieved by a pre-scan or preview image. For example, when performing three-dimensional imaging of the heart, a low-dose scout scan may first be performed to locate the exact position of the heart. Once the target object is accurately positioned, both imaging devices begin to acquire image data at the same time. The data will include various slice or projection images, depending on the imaging technique used: a CT scan generates multiple slice images, while MRI generates images of different sequences. The acquired first medical image data and second medical image data are stored in a suitable data storage device. These data require subsequent correction and integration to ensure that they can be properly aligned and registered for later processing and analysis. For example, assume that the server is to acquire images of a patient's head using two CT scanners with different perspectives for three-dimensional imaging of the brain. The two CT scanners are mounted in opposite positions to ensure that their viewing angles do not overlap. The patient is positioned between the scanners, and a preliminary scout scan is performed to confirm the correct position of the head. Once the patient is accurately positioned, both CT scanners begin acquiring slice image data of the head at the same time; these image data include cross-sectional images, each acquired from a different angle. After the data acquisition is completed, the image data are stored and subjected to subsequent image registration and integration to create an accurate three-dimensional imaging model of the brain for medical diagnosis or research.
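As an illustrative sketch only, the synchronized two-device acquisition described above can be approximated in software with a barrier-style trigger. The device class below is a hypothetical stand-in for a vendor SDK handle (`grab` is not a real API); practical systems would typically use a hardware trigger line instead.

```python
import threading
import time

class StubDevice:
    """Hypothetical stand-in for a vendor imaging-device handle."""
    def __init__(self, name):
        self.name = name
    def grab(self):
        time.sleep(0.01)                 # pretend to expose/read out one frame
        return f"{self.name}-frame"

def acquire_two_way(dev_a, dev_b, n_frames):
    barrier = threading.Barrier(2)       # both threads release each grab together
    results = [None, None]

    def worker(idx, dev):
        frames = []
        for _ in range(n_frames):
            barrier.wait()               # software trigger point
            frames.append(dev.grab())
        results[idx] = frames

    threads = [threading.Thread(target=worker, args=(i, d))
               for i, d in enumerate((dev_a, dev_b))]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results                        # [first image data, second image data]

first_data, second_data = acquire_two_way(StubDevice("A"), StubDevice("B"), 4)
```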
S102, respectively calculating a first focused image sequence of the first medical image data and a second focused image sequence of the second medical image data through a preset digital refocusing algorithm;
specifically, the server performs grayscale conversion on the first medical image data and the second medical image data. The purpose is to convert color images into grayscale images, which is convenient for subsequent processing and calculation. Image correction is then performed on the converted grayscale images; this may include denoising, contrast enhancement, and the like, to ensure image quality and reliability. A first image pyramid and a second image pyramid are generated; the pyramid is a multi-scale image representation in which the bottom layer contains the original image while each upper layer is a downsampled version of the layer below, facilitating processing at different scales. Using the image pyramids, the displacement between the first image pyramid and the second main image data and the displacement between the second image pyramid and the first main image data are calculated; this displacement information helps align the two sets of images for subsequent processing. With the calculated displacement data, a pixel interpolation method is used to find the image corresponding to each depth position in the first image pyramid, generating a first initial image sequence; likewise, a second initial image sequence is generated in the second image pyramid using pixel interpolation. These initial image sequences are then used for depth blur reconstruction and result integration: the first initial image sequence yields a first focused image sequence comprising a plurality of first refocused images, and the second initial image sequence yields a second focused image sequence comprising a plurality of second refocused images. For example, assume that the server is using two X-ray machines to acquire chest images of a patient for pulmonary three-dimensional imaging. The image data from the two X-ray machines are converted into grayscale images, and image correction is performed to improve quality. The server generates an image pyramid for each set of images, representing the same image at multiple scales. By calculating the displacement data, the server finds the displacement between the two sets of images, ensuring that they are aligned. Using pixel interpolation, the server finds the images corresponding to the depth positions in the first image pyramid, generating a first initial image sequence, and likewise generates a second initial image sequence in the second image pyramid. Through depth blur reconstruction and result integration, the server obtains a first focused image sequence and a second focused image sequence. These sequences contain multiple images, each representing a focused image at a particular depth position, together forming the basis of a three-dimensional imaging model.
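The following minimal sketch illustrates one plausible reading of this step using OpenCV: grayscale inputs, a downsampled pyramid, a global displacement estimated by phase correlation, and a shift-and-blend stand-in for the depth-dependent refocusing. The synthetic arrays and the choice of eight depth positions are assumptions for illustration; the patent's actual digital refocusing algorithm is not specified at this level of detail.

```python
import cv2
import numpy as np

def build_pyramid(img, levels=4):
    """Repeatedly downsample to obtain a multi-scale image pyramid."""
    pyr = [img.astype(np.float32)]
    for _ in range(levels - 1):
        pyr.append(cv2.pyrDown(pyr[-1]))
    return pyr

def focus_stack(main, other, shifts):
    """Warp `other` by candidate displacements and blend with `main`;
    each candidate shift plays the role of one depth position."""
    h, w = main.shape
    stack = []
    for dx, dy in shifts:
        m = np.float32([[1, 0, dx], [0, 1, dy]])
        warped = cv2.warpAffine(other, m, (w, h))
        stack.append((main + warped) / 2.0)
    return stack

# synthetic stand-ins for the two grayscale, corrected views
gray1 = np.random.rand(256, 256).astype(np.float32)
gray2 = np.roll(gray1, 3, axis=1)                       # second view, shifted 3 px

pyr1, pyr2 = build_pyramid(gray1), build_pyramid(gray2)
(dx, dy), _ = cv2.phaseCorrelate(pyr1[0], pyr2[0])      # global displacement estimate
shifts = [(dx * a, dy * a) for a in np.linspace(0.0, 1.0, 8)]
first_initial_sequence = focus_stack(gray1, gray2, shifts)   # 8 candidate images
```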
S103, calculating a first optimal refocusing coefficient of the first focused image sequence and a second optimal refocusing coefficient of the second focused image sequence through a preset sharpness evaluation function;
it should be noted that, for the plurality of first refocused images in the first focused image sequence, a second-order gradient operator is used to calculate a gradient image for each image, which helps capture edge and detail information in different portions of the image. Likewise, corresponding gradient images are calculated for the plurality of second refocused images in the second focused image sequence. A first sharpness evaluation value corresponding to each first gradient image and a second sharpness evaluation value corresponding to each second gradient image are then calculated through a preset sharpness evaluation function. This evaluation function measures the sharpness of an image, typically based on factors such as its gradients, contrast, and detail. A first optimal refocusing coefficient of the first focused image sequence is calculated according to the first sharpness evaluation values, and similarly a second optimal refocusing coefficient of the second focused image sequence is calculated based on the second sharpness evaluation values. These optimal refocusing coefficients are used for subsequent depth calibration and three-dimensional imaging. For example, assume that the server acquires brain images of a patient using two MRI devices for brain three-dimensional imaging, and that the first and second focused image sequences have been obtained. For each first refocused image in the first focused image sequence, its gradient image is calculated using a second-order gradient operator, capturing edge and texture information. The server then calculates a first sharpness evaluation value for each first gradient image through the preset sharpness evaluation function; this involves computing features in terms of the contrast, detail, and gradients of the image. Similarly, a second sharpness evaluation value is obtained for the gradient image of each second refocused image in the second focused image sequence. According to the calculated first sharpness evaluation values, the server calculates the first optimal refocusing coefficient of the first focused image sequence; likewise, the second optimal refocusing coefficient of the second focused image sequence is calculated based on the second sharpness evaluation values.
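A minimal sketch of the evaluation stage, assuming a Laplacian as the second-order gradient operator and the variance of the gradient image as the sharpness score; the index of the best-scoring image stands in for the optimal refocusing coefficient. The patent does not fix a particular evaluation function, so this is one common choice.

```python
import cv2
import numpy as np

def sharpness(img):
    lap = cv2.Laplacian(img.astype(np.float32), cv2.CV_32F)  # second-order gradient image
    return float(lap.var())                                   # variance: higher = sharper

def best_refocus_index(sequence):
    scores = [sharpness(im) for im in sequence]
    return int(np.argmax(scores)), scores

# placeholder focal stack standing in for a focused image sequence
stack = [np.random.rand(64, 64).astype(np.float32) for _ in range(8)]
best_idx, scores = best_refocus_index(stack)
```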
S104, constructing a first depth calibration curve according to the first optimal refocusing coefficient, and constructing a second depth calibration curve according to the second optimal refocusing coefficient;
specifically, a first calibration point is acquired from each first gradient image in the first focused image sequence; similarly, a second calibration point is acquired from each second gradient image in the second focused image sequence. These calibration points serve as references in the subsequent depth calibration. For each calibration point, its corresponding actual depth position is determined; this requires other measurement methods or known depth information to establish a mapping between calibration points and depth positions. A first depth calibration data set is constructed based on the first actual depth positions and the first optimal refocusing coefficient; likewise, a second depth calibration data set is constructed based on the second actual depth positions and the second optimal refocusing coefficient. These data sets are used for the subsequent curve fitting. Curve fitting is performed on the first depth calibration data set to obtain a first depth calibration curve, and on the second depth calibration data set to obtain a second depth calibration curve. The goal of the curve fitting is to find a suitable mathematical model that best describes the relationship between the calibration points and the actual depth positions. For example, assume that the server is acquiring cardiac images of a patient using two ultrasound devices for cardiac three-dimensional imaging, and that the first and second focused image sequences and their corresponding optimal refocusing coefficients have been obtained. A first calibration point is acquired from each first gradient image in the first focused image sequence, and a second calibration point from each second gradient image in the second focused image sequence. The actual heart depth positions corresponding to these calibration points are determined from known cardiac anatomy or other measurement methods. A first cardiac depth calibration data set is constructed based on the first actual cardiac depth positions and the first optimal refocusing coefficient; likewise, a second cardiac depth calibration data set is constructed based on the second actual cardiac depth positions and the second optimal refocusing coefficient. Curve fitting on the first cardiac depth calibration data set yields a first cardiac depth calibration curve, and curve fitting on the second yields a second cardiac depth calibration curve. These curves are used in subsequent three-dimensional imaging, helping to translate the depth information of the images into actual heart depth positions.
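A sketch of the curve-fitting idea under simple assumptions: known (optimal refocusing coefficient, actual depth) pairs are fitted with a low-order polynomial. The numeric values are illustrative only.

```python
import numpy as np

coeffs = np.array([0.10, 0.25, 0.40, 0.55, 0.70])   # optimal refocusing coefficients
depths = np.array([5.0, 12.0, 21.0, 33.0, 47.0])    # measured actual depths (e.g. mm)

poly = np.polyfit(coeffs, depths, deg=2)             # fit depth = f(coefficient)
depth_of = np.poly1d(poly)                           # the depth calibration curve
print(depth_of(0.5))                                 # depth predicted for coefficient 0.5
```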
S105, determining first spatial position information of a target detection object in the first medical image data according to the first depth calibration curve, and determining second spatial position information of the target detection object in the second medical image data according to the second depth calibration curve;
specifically, an inverse fit is performed on the first depth calibration curve. Inverse fitting refers to the process of deriving the input values (image parameters) back from the output values (depth positions) of the curve; this can be achieved through mathematical modeling and an inverse function. Similarly, an inverse fit is performed on the second depth calibration curve. The inversely fitted first depth calibration curve is solved numerically through a preset first inverse function to obtain the first spatial position information of the target detection object in the first medical image data; the inversely fitted second depth calibration curve is solved numerically with a preset second inverse function to obtain the second spatial position information of the target detection object in the second medical image data. Once the numerical solution is complete, the spatial position information of the target detection object in the first and second medical image data is obtained. Such information may include the three-dimensional coordinates, size, and orientation of the object, depending on the design of the inverse function and the accuracy of the calibration curve. For example, assume that the server is using two CT scanners to acquire images of a patient's head for three-dimensional imaging of the head, and that the inverse fitting of the first and second depth calibration curves has been completed with corresponding inverse functions. For the first depth calibration curve, the server performs the inverse fit, calculating the image parameters back from the depth positions of the image; this involves a mapping between depth calibration points and image parameters for different parts of the head. The first inverse function is used for the numerical solution, mapping the output values (depth positions) of the inversely fitted first depth calibration curve to spatial position information, such as three-dimensional coordinates, of the target detection object in the first medical image data. The same procedure applies to the second depth calibration curve: the server maps its output values to spatial position information of the target detection object in the second medical image data through inverse fitting and numerical solution. Finally, the server obtains the spatial position information of the target detection object in both sets of medical image data, which is used for subsequent three-dimensional imaging and analysis.
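A sketch of the numerical inversion, assuming the quadratic calibration fit from the previous sketch and a curve that is monotonic on the bracketing interval; scipy's brentq root finder plays the role of the preset inverse function.

```python
import numpy as np
from scipy.optimize import brentq

# quadratic depth calibration curve fitted from illustrative (coefficient, depth) pairs
coeffs = np.array([0.10, 0.25, 0.40, 0.55, 0.70])
depths = np.array([5.0, 12.0, 21.0, 33.0, 47.0])
depth_of = np.poly1d(np.polyfit(coeffs, depths, deg=2))

def coefficient_for_depth(target_depth, lo=0.0, hi=1.0):
    # numerical inverse: find c such that depth_of(c) == target_depth;
    # assumes depth_of is monotonic over [lo, hi]
    return brentq(lambda c: depth_of(c) - target_depth, lo, hi)

print(coefficient_for_depth(21.0))   # recovers roughly 0.40 for the sample data
```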
S106, constructing a first target three-dimensional tensor according to the first spatial position information and the first focused image sequence, and constructing a second target three-dimensional tensor according to the second spatial position information and the second focused image sequence;
specifically, a slicing order of the first focused image sequence is determined according to the first spatial position information; similarly, a slicing order of the second focused image sequence is determined based on the second spatial position information. This slicing order ensures that the slices are arranged correctly when constructing the target three-dimensional tensors. A first initial three-dimensional tensor of the first focused image sequence is constructed based on the determined slicing order; likewise, a second initial three-dimensional tensor of the second focused image sequence is constructed according to the second slicing order. These initial three-dimensional tensors contain the spatial information of the slice images, ready for further feature extraction. The first initial three-dimensional tensor and the second initial three-dimensional tensor are then input into a preset convolutional long short-term memory network (ConvLSTM). ConvLSTM is a neural network structure combining a convolutional neural network with a long short-term memory network and is suitable for processing spatiotemporal sequence data; through ConvLSTM, the spatiotemporal features of the target detection object can be extracted from the three-dimensional tensors. Features are extracted from the first initial three-dimensional tensor through the ConvLSTM to generate the corresponding first target three-dimensional tensor; similarly, features are extracted from the second initial three-dimensional tensor to generate the corresponding second target three-dimensional tensor. These target three-dimensional tensors contain the spatiotemporal feature information of the target detection object. For example, assume that the server is acquiring muscle images of a patient using two MRI devices for three-dimensional imaging of muscle, and that the first and second focused image sequences and the first and second spatial position information have been obtained. A slicing order of the first focused image sequence is determined from the first spatial position information, and a slicing order of the second focused image sequence from the second spatial position information. A first initial three-dimensional tensor and a second initial three-dimensional tensor are constructed according to the respective slicing orders and input into the preset ConvLSTM, which extracts spatiotemporal features from the image sequences and generates the first and second target three-dimensional tensors. Ultimately, these target three-dimensional tensors contain the spatiotemporal features of the muscle images, enabling a better presentation of the morphology and structure of the muscle and providing valuable information for medical diagnosis and research.
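A sketch of the tensor-construction half of this step: slices are ordered by their recovered depths and stacked into a (slices, height, width) tensor. The ConvLSTM feature-extraction stage is architecture-specific and omitted here; any ConvLSTM implementation accepting such a sequence could be slotted in afterwards. The depth values and slice contents are illustrative.

```python
import numpy as np

def build_initial_tensor(images, slice_depths):
    """Order slices by recovered depth and stack into a (slices, H, W) tensor."""
    order = np.argsort(slice_depths)
    return np.stack([images[i] for i in order])

# five placeholder slices whose recovered depths arrive out of order
slices = [np.random.rand(64, 64) for _ in range(5)]
slice_depths = [33.0, 5.0, 21.0, 12.0, 47.0]
initial_tensor = build_initial_tensor(slices, slice_depths)   # shape (5, 64, 64)
```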
S107, fusing the first target three-dimensional tensor and the second target three-dimensional tensor to obtain a fused three-dimensional tensor, and performing three-dimensional imaging on the fused three-dimensional tensor to obtain a target three-dimensional imaging model of the target detection object.
Specifically, the server fuses the first target three-dimensional tensor and the second target three-dimensional tensor. This may be achieved by weighted averaging or by fusion algorithms such as tensor stitching or feature fusion networks; the purpose of the fusion is to combine information from the two sources to obtain more comprehensive and accurate characteristics of the target object. Voxel values are then assigned to the fused three-dimensional tensor, i.e., each voxel is associated with a specific numerical value. These values may represent different properties such as density, color, or gray value; this step provides concrete information for each voxel and prepares for the subsequent three-dimensional rendering. Three-dimensional imaging is performed on the fused three-dimensional tensor using voxel rendering. Voxel rendering is a method of mapping three-dimensional data onto a two-dimensional screen; through ray tracing and voxel-value interpolation, three-dimensional images with realistic effects can be generated. During rendering, the color and transparency of each pixel are determined according to the values and positions of the voxels, thereby generating the three-dimensional imaging model of the target object. For example, assume that the server is using two X-ray CT scanners to acquire bone images of a patient for three-dimensional imaging of bone, and that the first and second target three-dimensional tensors have been obtained. They are fused by computing a weighted average of the values of each voxel, resulting in the fused three-dimensional tensor. Voxel values are assigned to the fused tensor, associating each voxel with an appropriate value to represent the density of different bone tissues. Three-dimensional imaging is then performed using voxel rendering: the color and transparency of each pixel are determined by ray tracing and interpolation, generating a realistic three-dimensional imaging model of the bone.
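A sketch of the fusion and rendering under deliberately simple assumptions: a fixed-weight voxel-wise average for the fusion, and a maximum-intensity projection as a basic stand-in for full ray-traced voxel rendering.

```python
import numpy as np

def fuse(t1, t2, w=0.5):
    """Voxel-wise weighted average of the two target tensors."""
    return w * t1 + (1.0 - w) * t2          # fused three-dimensional tensor

def max_intensity_projection(volume, axis=0):
    """Project the voxel volume onto a plane by taking the max along one axis."""
    return volume.max(axis=axis)            # one 2-D rendered view

vol1 = np.random.rand(5, 64, 64)            # placeholder target tensors
vol2 = np.random.rand(5, 64, 64)
fused = fuse(vol1, vol2)
image = max_intensity_projection(fused)
```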
In the embodiment of the invention, first medical image data and second medical image data of a target detection object are acquired; a first focused image sequence and a second focused image sequence are calculated; a first optimal refocusing coefficient and a second optimal refocusing coefficient are calculated through a sharpness evaluation function; a first depth calibration curve and a second depth calibration curve are constructed; first spatial position information and second spatial position information are determined; and a first target three-dimensional tensor and a second target three-dimensional tensor are constructed. This can provide more comprehensive and accurate structural information about the target object: through multi-view image acquisition, the anatomical structure of the target object can be presented at different angles and depths, so that its morphology and characteristics are understood more comprehensively. Beyond static three-dimensional imaging, a spatiotemporal correspondence can be established by introducing a convolutional long short-term memory network, realizing dynamic three-dimensional imaging and thereby improving the clarity and accuracy of three-dimensional imaging of medical images acquired over two image paths.
In a specific embodiment, the process of executing step S102 may specifically include the following steps:
(1) Performing gray level image conversion on the first medical image data and the second medical image data to obtain first gray level image data and second gray level image data;
(2) Performing image correction on the first gray scale image data and the second gray scale image data to obtain first corrected image data and second corrected image data;
(3) Downsampling the first corrected image data and the second corrected image data multiple times to generate a corresponding first image pyramid and a corresponding second image pyramid;
(4) Taking the first image pyramid as first main image data, calculating the displacement between the second image pyramid and the first main image data to obtain first displacement data, and finding an image corresponding to the depth position in the first image pyramid by adopting a pixel interpolation method according to the first displacement data to obtain a first initial image sequence;
(5) Performing depth blur reconstruction and result integration on the first initial image sequence to obtain a first focused image sequence, wherein the first focused image sequence comprises a plurality of first refocused images;
(6) Taking the second image pyramid as second main image data, calculating displacement between the first image pyramid and the second main image data to obtain second displacement data, and finding an image at a corresponding depth position in the second image pyramid by adopting a pixel interpolation method according to the second displacement data to obtain a second initial image sequence;
(7) Carrying out depth blur reconstruction and result integration on the second initial image sequence to obtain a second focused image sequence, wherein the second focused image sequence comprises a plurality of second refocused images.
Specifically, the server extracts luminance information from the first medical image data and the second medical image data and converts it into grayscale image data; the purpose is to eliminate color information, making subsequent processing more accurate and simpler. Image correction is performed on the first and second grayscale image data; this involves removing artifacts, reducing noise, enhancing contrast, and so on, to ensure consistency and comparability of image quality. The first and second corrected image data are downsampled multiple times to generate the corresponding first and second image pyramids. A pyramid is an image data structure containing multiple resolution levels, making image processing and feature extraction at different resolutions more flexible. Taking the first image pyramid as the first main image data, the displacement between the second image pyramid and the first main image data is calculated; this displacement information describes the offset of the second image relative to the first main image and is used for subsequent pixel interpolation and depth reconstruction. Based on the displacement data, a pixel interpolation method is used to find the image corresponding to each depth position in the first image pyramid, obtaining a first initial image sequence; the image corresponding to each depth position in the second image pyramid is found according to the second displacement data, obtaining a second initial image sequence. Depth blur reconstruction and result integration are then performed on the first and second initial image sequences. Depth blur reconstruction brings together image information at different depth positions, generating a series of images with different focal points, and the result-integration stage assembles these reconstructed images into the first and second focused image sequences, each containing images with multiple different focal points. For example, suppose the server is performing fundus blood vessel imaging and acquires two sets of fundus image data using imaging devices at two different viewing angles; these images are used to detect the three-dimensional structure of the fundus blood vessels. Luminance information is extracted from the two sets of image data and converted into grayscale image data, the grayscale images are corrected to remove artifacts and noise, and the corrected images are downsampled multiple times to generate pyramids. By calculating the displacement, the server obtains the offset of the second image relative to the first main image. Pixel interpolation is used to find the images corresponding to the depth positions in the pyramids, yielding the first and second initial image sequences, and depth blur reconstruction and result integration yield the first and second focused image sequences. These focused image sequences contain fundus blood vessel images with different focal points and can be used for further three-dimensional imaging and vascular structure analysis.
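A minimal sketch of the pixel-interpolation step described above, assuming the displacement is scaled by a per-depth fraction and resampled with spline interpolation via scipy.ndimage.shift; the displacement values and the eight depth fractions are illustrative.

```python
import numpy as np
from scipy.ndimage import shift

def image_at_depth(pyramid_level, displacement, fraction):
    """Scale the full displacement by a depth fraction and resample the image
    with (bilinear) spline interpolation to realize one depth position."""
    dy, dx = displacement
    return shift(pyramid_level, (dy * fraction, dx * fraction), order=1)

level = np.random.rand(128, 128)             # placeholder pyramid level
initial_sequence = [image_at_depth(level, (3.2, -1.7), f)
                    for f in np.linspace(0.0, 1.0, 8)]
```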
In a specific embodiment, as shown in FIG. 2, the process of performing step S103 may specifically include the following steps:
S201, respectively calculating gradient images of a plurality of first refocused images in the first focused image sequence by adopting a second-order gradient operator to obtain a first gradient image of each first refocused image, and respectively calculating gradient images of a plurality of second refocused images in the second focused image sequence to obtain a second gradient image of each second refocused image;
S202, calculating a first sharpness evaluation value corresponding to each first gradient image through a preset sharpness evaluation function, and calculating a second sharpness evaluation value corresponding to each second gradient image through the sharpness evaluation function;
S203, calculating a first optimal refocusing coefficient of the first focused image sequence according to the first sharpness evaluation value, and calculating a second optimal refocusing coefficient of the second focused image sequence according to the second sharpness evaluation value.
In particular, gradient images are often used in medical image processing to capture edges and features in images. Gradient operators such as Sobel or Scharr are commonly used to compute gradient images; these operators calculate the rate of change of pixel values in different directions, thereby obtaining the gradient information in the image. For each first refocused image in the first focused image sequence, the selected gradient operator is applied to obtain a first gradient image; likewise, a second gradient image is calculated for each second refocused image in the second focused image sequence. The sharpness evaluation function is used to measure the sharpness and quality of an image. Common evaluation functions include the Structural Similarity Index (SSIM) and the Gradient Magnitude Similarity Index (GMSI); these can be used to quantify the sharpness of an image, typically returning a score between 0 and 1, where 1 represents the best sharpness. For each first gradient image, a first sharpness evaluation value is calculated; likewise, a second sharpness evaluation value is calculated for each second gradient image. These evaluation values reflect the sharpness of the images at different depth positions. A first optimal refocusing coefficient of the first focused image sequence is calculated according to the first sharpness evaluation values, and similarly a second optimal refocusing coefficient of the second focused image sequence is calculated based on the second sharpness evaluation values. These optimal refocusing coefficients help determine which images should be used at each depth position for refocusing with optimal sharpness. For example, assume that the server is processing a series of fundus images to detect retinopathy. These images contain different depth positions of the retina, such as its surface and deeper layers. For each depth position, the server applies a Sobel gradient operator to the image to calculate a gradient image, capturing edges and features. The server then calculates a sharpness evaluation value for each gradient image using a sharpness evaluation function such as GMSI; these evaluation values tell the server which depth positions yield the sharpest images. By comparing the sharpness evaluation values, the server calculates the optimal refocusing coefficients. For example, if images located in the deeper layers score higher in the sharpness evaluation, the server may choose to refocus more toward the deep layers to obtain the best sharpness.
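A Tenengrad-style sketch matching the Sobel variant mentioned above: the sharpness score is the mean squared gradient magnitude. This is one common gradient-based choice, not the patent's prescribed function, and the placeholder focal stack is illustrative.

```python
import cv2
import numpy as np

def tenengrad(img):
    """Mean squared Sobel gradient magnitude; higher values mean sharper images."""
    gx = cv2.Sobel(img.astype(np.float32), cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(img.astype(np.float32), cv2.CV_32F, 0, 1, ksize=3)
    return float(np.mean(gx * gx + gy * gy))

stack = [np.random.rand(64, 64) for _ in range(8)]   # placeholder focal stack
best = int(np.argmax([tenengrad(im) for im in stack]))
```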
In a specific embodiment, as shown in fig. 3, the process of executing step S104 may specifically include the following steps:
S301, acquiring a first calibration point of each first gradient image in the first focused image sequence, and acquiring a second calibration point of each second gradient image in the second focused image sequence;
S302, determining a first actual depth position of the first calibration point and a second actual depth position of the second calibration point;
S303, constructing a first depth calibration data set according to the first actual depth position and the first optimal refocusing coefficient, and constructing a second depth calibration data set according to the second actual depth position and the second optimal refocusing coefficient;
S304, performing curve fitting on the first depth calibration data set to obtain a first depth calibration curve, and performing curve fitting on the second depth calibration data set to obtain a second depth calibration curve.
In particular, for each first gradient image in the first focused image sequence, the server needs to extract calibration points from the image. These calibration points may be salient features of the lesion area, such as edges or corner points. Likewise, a corresponding second calibration point is extracted for each second gradient image in the second focused image sequence. The first actual depth position of the first calibration point and the second actual depth position of the second calibration point can be determined from a previously established correspondence between the image and actual depth positions; this involves calibration and measurement that relate features in the image to actual depths. A first depth calibration data set is then constructed from the first actual depth positions and the first optimal refocusing coefficients: the depth positions and refocusing coefficients are taken as inputs, and a data set is created containing multiple data points, each consisting of an actual depth position and the corresponding refocusing coefficient. Likewise, a second depth calibration data set is constructed from the second actual depth positions and the second optimal refocusing coefficients. Curve fitting is performed on the first depth calibration data set to obtain a first depth calibration curve; the curve family may be polynomial, a Gaussian process, or similar, depending on the nature and distribution of the data. Likewise, curve fitting is performed on the second depth calibration data set to obtain a second depth calibration curve. For example, consider a medical imaging system for fundus images in which the server wants to determine the depth position of a specific feature, such as a lesion area. The server extracts a first calibration point for the feature from the first focused image sequence and a second calibration point from the second focused image sequence. By measuring and calibrating these points, the server determines the actual depth positions corresponding to the first and second calibration points, constructs the two depth calibration data sets from the optimal refocusing coefficients and actual depth positions, and performs curve fitting on the data sets to obtain the first and second depth calibration curves. These curves map feature locations in the image to actual depth locations, enabling three-dimensional localization and imaging in the medical image.
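A minimal sketch of the curve-fitting stage (steps S301–S304) follows; the cubic polynomial family and all numeric calibration pairs are assumptions for illustration, not values from the patent:

```python
# Illustrative sketch only: fit depth = f(optimal refocusing coefficient)
# for one view. The cubic degree and the calibration pairs are assumptions.
import numpy as np

def fit_depth_calibration(refocus_coeffs, actual_depths_mm, degree=3):
    """Curve-fit a depth calibration data set; returns a callable curve."""
    return np.poly1d(np.polyfit(refocus_coeffs, actual_depths_mm, deg=degree))

# Hypothetical calibration data set: (coefficient, actual depth in mm) pairs.
alphas = np.array([0.8, 0.9, 1.0, 1.1, 1.2])
depths = np.array([0.0, 0.6, 1.3, 2.1, 3.2])
curve = fit_depth_calibration(alphas, depths)
print(curve(1.05))   # interpolated depth for an intermediate coefficient
```

The same fit is repeated for the second view's data set to obtain the second depth calibration curve.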
In a specific embodiment, as shown in fig. 4, the process of performing step S105 may specifically include the following steps:
S401, performing inverse fitting on the first depth calibration curve and solving numerically through a preset first inverse function to obtain first spatial position information of the target detection object in the first medical image data;
S402, performing inverse fitting on the second depth calibration curve and solving numerically through a preset second inverse function to obtain second spatial position information of the target detection object in the second medical image data.
Specifically, inverse fitting means fitting a curve with a known function family and then recovering the input value through the inverse of that function. Having fitted the relationship between image depth position and actual depth position with the known depth calibration curve, the server computes the spatial position of the target detection object in the image through the inverse function. The first depth calibration curve is inverse-fitted, using an inverse function to map actual depth positions back to depth positions in the image; this inverse function may be a predefined mathematical function such as an exponential or logarithmic function. The server thus converts depth positions in the image to actual depth positions through the inverse function. Likewise, an inverse fit is performed on the second depth calibration curve, mapping actual depth positions back to image depth positions through a predefined inverse function. For example, consider a medical imaging system that processes nuclear-medicine images. Suppose the server has a set of positron emission tomography (PET) images from which it locates the distribution of a radiotracer within the body. The server applies a calibration procedure to the PET images to establish the relationship between each radiotracer distribution and the actual depth position, namely the first and second depth calibration curves. It then performs an inverse fit on the curves, mapping the depth positions in the images back to actual depth positions with a predefined inverse function, and so obtains the actual depth position corresponding to the radiotracer distribution in each image. For instance, if the server observes an exponential relationship between the peak position of the radiotracer and depth in a series of PET images, it uses the inverse of the exponential function to map the peak locations in the images back to actual depths. In this way the server obtains the radiotracer distribution at different depths and recovers the spatial position information of the target object in the medical image.
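The numerical inversion itself can be as simple as a bracketed root solve on the fitted curve; the sketch below (with a hypothetical curve and bracketing interval) shows one way steps S401–S402 might be realized:

```python
# Illustrative sketch only: invert a fitted depth calibration curve
# numerically. The calibration data and bracketing interval are assumptions.
import numpy as np
from scipy.optimize import brentq

alphas = np.array([0.8, 0.9, 1.0, 1.1, 1.2])
depths = np.array([0.0, 0.6, 1.3, 2.1, 3.2])           # mm, hypothetical
curve = np.poly1d(np.polyfit(alphas, depths, deg=3))   # depth = f(alpha)

def invert_calibration(target_depth_mm, lo=0.8, hi=1.2):
    """Solve curve(alpha) == target_depth_mm for alpha within [lo, hi]."""
    return brentq(lambda a: curve(a) - target_depth_mm, lo, hi)

# Refocusing coefficient (hence image depth plane) of a target ~1 mm deep:
print(invert_calibration(1.0))
```

A closed-form inverse (e.g., the logarithm of an exponential fit) can replace the root solve when the curve family admits one.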
In a specific embodiment, the process of executing step S106 may specifically include the following steps:
(1) Determining a first slicing order of the first focused image sequence according to the first spatial position information, and determining a second slicing order of the second focused image sequence according to the second spatial position information;
(2) Constructing a first initial three-dimensional tensor of a first focused image sequence according to a first slicing order, and constructing a second initial three-dimensional tensor of a second focused image sequence according to a second slicing order;
(3) And respectively inputting the first initial three-dimensional tensor and the second initial three-dimensional tensor into a preset convolutional long short-term memory (ConvLSTM) network, extracting the spatio-temporal features of the target detection object through the ConvLSTM network, and generating a corresponding first target three-dimensional tensor and a corresponding second target three-dimensional tensor.
Specifically, the slice order of the first focused image sequence is determined according to the first spatial position information; this arranges the slices of the image sequence in a particular order for the subsequent three-dimensional tensor construction. Likewise, the slice order of the second focused image sequence is determined from the second spatial position information. According to the determined slice order, the slices of the first focused image sequence are stacked in turn to construct a first initial three-dimensional tensor, and the slices of the second focused image sequence are stacked likewise to construct a second initial three-dimensional tensor. The first and second initial three-dimensional tensors are then each input into a preset convolutional long short-term memory network (ConvLSTM). ConvLSTM is a network structure that combines convolutional neural networks (CNNs) with long short-term memory networks (LSTMs) and is designed for spatio-temporal sequence data; through it, the network learns to extract spatio-temporal features, capturing the changes and correlations within an image sequence. Processing the first initial three-dimensional tensor through the ConvLSTM network extracts its spatio-temporal features and generates the corresponding first target three-dimensional tensor; similarly, spatio-temporal features are extracted from the second initial three-dimensional tensor to generate the second target three-dimensional tensor. These target tensors contain the spatio-temporal characteristics of the target detection object in the image sequences and provide valuable information for the subsequent fusion and three-dimensional imaging. For example, consider an MRI system used to observe changes in a patient's brain activity. The server builds three-dimensional tensors from the MRI image sequence and extracts the spatio-temporal features of brain activity. It first determines the first spatial position information, namely the coordinates of a specific brain region, and from this determines the slice order of the MRI sequence so as to capture changes in that region. The slices are stacked in order to build the first initial three-dimensional tensor; likewise, the slice order of the second focused image sequence is determined from the second spatial position information and the second initial three-dimensional tensor is built. The server feeds the initial tensors into a ConvLSTM network, which learns and extracts the spatio-temporal characteristics of the brain activity. Finally, the features produced by the ConvLSTM network form the target three-dimensional tensors, which carry the spatio-temporal information of brain activity in the MRI sequence and provide a basis for further analysis and imaging.
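A minimal sketch of this step, assuming each ordered slice is a 2-D array, stacks the slices into a (batch, time, height, width, channels) tensor and passes it through a small ConvLSTM layer; the layer sizes are illustrative assumptions, since the patent does not specify its "preset" network:

```python
# Illustrative sketch only: build an initial 3-D tensor from depth-ordered
# slices and extract spatio-temporal features with a ConvLSTM layer.
# Filter count and kernel size are assumptions, not the patent's network.
import numpy as np
import tensorflow as tf

def build_target_tensor(slices_in_depth_order):
    x = np.stack(slices_in_depth_order).astype("float32")   # (time, H, W)
    x = x[np.newaxis, ..., np.newaxis]                      # (1, time, H, W, 1)
    conv_lstm = tf.keras.layers.ConvLSTM2D(
        filters=8, kernel_size=3, padding="same",
        return_sequences=True)   # keep one feature map per slice position
    return conv_lstm(x)          # spatio-temporal feature tensor

slices = [np.random.rand(64, 64) for _ in range(10)]  # hypothetical ordered slices
features = build_target_tensor(slices)                # shape (1, 10, 64, 64, 8)
```

Running the same routine on each view's slice stack yields the first and second target three-dimensional tensors.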
In a specific embodiment, the process of executing step S107 may specifically include the following steps:
(1) Fusing the first target three-dimensional tensor and the second target three-dimensional tensor to obtain a fused three-dimensional tensor;
(2) Giving voxel values to the fusion three-dimensional tensor to obtain three-dimensional voxel data;
(3) And carrying out voxel rendering on the three-dimensional voxel data to obtain a target three-dimensional imaging model of the target detection object.
In particular, the target three-dimensional tensors are the spatio-temporal features extracted from the medical images acquired at different viewing angles; fusing them increases the information content and accuracy of the result. Common fusion methods include element-wise addition and weighted fusion. During fusion, the correspondence of sizes and coordinates between the tensors must be respected so that the fused data remain consistent. The fused three-dimensional tensor is then converted into three-dimensional voxel data, that is, a numerical value is assigned to each position in three-dimensional space. These values may represent properties of the target object such as density or absorption coefficient, forming a density distribution model in three-dimensional space; voxel values may be assigned from the characteristics of the target object and medical knowledge, or obtained through image segmentation and similar techniques. Voxel rendering converts the three-dimensional voxel data into a visual image that displays the three-dimensional imaging model of the target object. Common voxel rendering methods include voxel ray casting and isosurface extraction; during rendering, parameters such as transparency and color mapping can be adjusted to obtain a clearer and more vivid result. For example, consider a nuclear-medicine (PET) scenario in which the server generates a three-dimensional imaging model of a radiotracer inside a patient from PET images taken at different viewing angles. Through the preceding steps, the server obtains two target three-dimensional tensors representing the spatio-temporal features of the two views. It adds the two tensors element by element according to their coordinate correspondence to realize the fusion, then converts the fused tensor into three-dimensional voxel data; in this embodiment, the server assigns density values to the voxels based on information such as the absorption coefficients in the PET images. Finally, voxel rendering converts the voxel data into a visual three-dimensional imaging model. By setting parameters such as transparency and color mapping, the distribution of the radiotracer within the patient can be seen clearly, realizing the target three-dimensional imaging model of the target detection object.
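The sketch below illustrates one possible realization of this step under stated assumptions: element-wise weighted fusion of two co-registered tensors, followed by a maximum-intensity projection as a lightweight stand-in for full voxel ray casting. The equal weights and random volumes are hypothetical:

```python
# Illustrative sketch only: fuse two co-registered target tensors and render
# a maximum-intensity projection. Weights and volumes are assumptions.
import numpy as np

def fuse_and_render(t1, t2, w1=0.5, w2=0.5):
    assert t1.shape == t2.shape, "tensors must share size and coordinates"
    voxels = w1 * t1 + w2 * t2      # fused three-dimensional voxel data
    mip = voxels.max(axis=0)        # project along the depth axis for display
    return voxels, mip

vol_a = np.random.rand(10, 64, 64)  # hypothetical view-A target tensor
vol_b = np.random.rand(10, 64, 64)  # hypothetical view-B target tensor
voxels, projection = fuse_and_render(vol_a, vol_b)
```

A production renderer would replace the projection with ray casting and a transfer function controlling transparency and color mapping.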
The three-dimensional imaging method of medical images based on two-way image acquisition in the embodiment of the present invention is described above; the three-dimensional imaging system of medical images based on two-way image acquisition in the embodiment of the present invention is described below. Referring to fig. 5, one embodiment of the three-dimensional imaging system of medical images based on two-way image acquisition in the embodiment of the present invention includes:
the acquisition module 501 is configured to acquire two paths of medical image data of a target detection object through two imaging devices with different viewing angles, so as to obtain first medical image data and second medical image data;
a calculating module 502, configured to calculate a first focused image sequence of the first medical image data and a second focused image sequence of the second medical image data by using a preset digital refocusing algorithm, respectively;
a processing module 503, configured to calculate a first best refocusing coefficient of the first focused image sequence and a second best refocusing coefficient of the second focused image sequence according to a preset sharpness evaluation function;
the calibration module 504 is configured to construct a first depth calibration curve according to the first optimal refocusing coefficient, and construct a second depth calibration curve according to the second optimal refocusing coefficient;
an analysis module 505, configured to determine first spatial position information of a target detection object in the first medical image data according to the first depth calibration curve, and determine second spatial position information of the target detection object in the second medical image data according to the second depth calibration curve;
a construction module 506, configured to construct a first target three-dimensional tensor according to the first spatial location information and the first focused image sequence, and construct a second target three-dimensional tensor according to the second spatial location information and the second focused image sequence;
the imaging module 507 is configured to fuse the first target three-dimensional tensor and the second target three-dimensional tensor to obtain a fused three-dimensional tensor, and perform three-dimensional imaging on the fused three-dimensional tensor to obtain a target three-dimensional imaging model of the target detection object.
Through the coordinated cooperation of the above components, first medical image data and second medical image data of the target detection object are collected; the first and second focused image sequences are calculated; the first and second optimal refocusing coefficients are computed through the sharpness evaluation function; the first and second depth calibration curves are constructed; the first and second spatial position information are determined; and the first and second target three-dimensional tensors are constructed. The method and system can thus provide more comprehensive and accurate structural information about the target object: multi-view image acquisition presents the anatomy of the target object at different angles and depths, giving a fuller understanding of its morphology and characteristics. Beyond static three-dimensional imaging, introducing a convolutional long short-term memory network establishes spatio-temporal correspondences and enables dynamic three-dimensional imaging, further improving the sharpness and accuracy of three-dimensional imaging of medical images acquired over the two image paths.
The three-dimensional imaging system for medical images based on two-way image acquisition in the embodiment of the present invention is described in detail above from the perspective of modularized functional entities with reference to fig. 5; the three-dimensional imaging device for medical images based on two-way image acquisition in the embodiment of the present invention is described in detail below from the perspective of hardware processing.
Fig. 6 is a schematic structural diagram of a three-dimensional imaging device for medical images based on two-way image acquisition according to an embodiment of the present invention. The device 600 may vary considerably in configuration or performance and may include one or more processors (central processing units, CPUs) 610, a memory 620, and one or more storage media 630 (e.g., one or more mass storage devices) storing application programs 633 or data 632. The memory 620 and the storage medium 630 may be transitory or persistent storage. The program stored in the storage medium 630 may include one or more modules (not shown), each of which may include a series of instruction operations for the device 600. Furthermore, the processor 610 may be configured to communicate with the storage medium 630 so as to execute the series of instruction operations in the storage medium 630 on the device 600.
The two-way image acquisition based medical image three-dimensional imaging device 600 may also include one or more power supplies 640, one or more wired or wireless network interfaces 650, one or more input/output interfaces 660, and/or one or more operating systems 631, such as Windows Server, Mac OS X, Unix, Linux, FreeBSD, and the like. It will be appreciated by those skilled in the art that the device structure shown in fig. 6 does not constitute a limitation on the three-dimensional imaging device for medical images based on two-way image acquisition, which may include more or fewer components than illustrated, combine certain components, or arrange the components differently.
The invention also provides a three-dimensional imaging device for medical images based on two-way image acquisition, which comprises a memory and a processor. The memory stores computer-readable instructions that, when executed by the processor, cause the processor to execute the steps of the three-dimensional imaging method for medical images based on two-way image acquisition in the above embodiments.
The present invention also provides a computer-readable storage medium, which may be non-volatile or volatile, storing instructions that, when run on a computer, cause the computer to perform the steps of the three-dimensional imaging method for medical images based on two-way image acquisition.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, or in whole or in part, may be embodied in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods according to the embodiments of the present invention. The aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random-access memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
The above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.
Claims (8)
1. A three-dimensional imaging method of a medical image based on two-way image acquisition, characterized by comprising the following steps:
acquiring two paths of medical image data of a target detection object through two imaging devices with different viewing angles, respectively, to obtain first medical image data and second medical image data;
respectively calculating a first focused image sequence of the first medical image data and a second focused image sequence of the second medical image data through a preset digital refocusing algorithm; the method specifically comprises the following steps: performing grayscale conversion on the first medical image data and the second medical image data to obtain first grayscale image data and second grayscale image data; performing image correction on the first grayscale image data and the second grayscale image data to obtain first corrected image data and second corrected image data; downsampling the first corrected image data and the second corrected image data a plurality of times to generate a corresponding first image pyramid and a corresponding second image pyramid; taking the first image pyramid as first main image data, calculating the displacement between the second image pyramid and the first main image data to obtain first displacement data, and finding the image corresponding to each depth position in the first image pyramid by pixel interpolation according to the first displacement data to obtain a first initial image sequence; performing depth blur reconstruction and result integration on the first initial image sequence to obtain the first focused image sequence, wherein the first focused image sequence comprises a plurality of first refocusing images; taking the second image pyramid as second main image data, calculating the displacement between the first image pyramid and the second main image data to obtain second displacement data, and finding the image corresponding to each depth position in the second image pyramid by pixel interpolation according to the second displacement data to obtain a second initial image sequence; and performing depth blur reconstruction and result integration on the second initial image sequence to obtain the second focused image sequence, wherein the second focused image sequence comprises a plurality of second refocusing images;
calculating a first optimal refocusing coefficient of the first focused image sequence and a second optimal refocusing coefficient of the second focused image sequence through a preset sharpness evaluation function; the method specifically comprises the following steps: respectively calculating gradient images of the plurality of first refocusing images in the first focused image sequence by using a second-order gradient operator to obtain a first gradient image of each first refocusing image, and respectively calculating gradient images of the plurality of second refocusing images in the second focused image sequence to obtain a second gradient image of each second refocusing image; calculating a first sharpness evaluation value corresponding to each first gradient image through the preset sharpness evaluation function, and calculating a second sharpness evaluation value corresponding to each second gradient image through the same sharpness evaluation function; and calculating the first optimal refocusing coefficient of the first focused image sequence according to the first sharpness evaluation values, and calculating the second optimal refocusing coefficient of the second focused image sequence according to the second sharpness evaluation values;
constructing a first depth calibration curve according to the first optimal refocusing coefficient, and constructing a second depth calibration curve according to the second optimal refocusing coefficient;
Determining first spatial position information of a target detection object in the first medical image data according to the first depth calibration curve, and determining second spatial position information of the target detection object in the second medical image data according to the second depth calibration curve;
constructing a first target three-dimensional tensor according to the first spatial position information and the first focused image sequence, and constructing a second target three-dimensional tensor according to the second spatial position information and the second focused image sequence;
and fusing the first target three-dimensional tensor and the second target three-dimensional tensor to obtain a fused three-dimensional tensor, and performing three-dimensional imaging on the fused three-dimensional tensor to obtain a target three-dimensional imaging model of the target detection object.
2. The two-way image acquisition based medical image three-dimensional imaging method according to claim 1, wherein the constructing a first depth calibration curve from the first optimal refocusing coefficient and a second depth calibration curve from the second optimal refocusing coefficient comprises:
acquiring a first calibration point of each first gradient image in the first focused image sequence, and acquiring a second calibration point of each second gradient image in the second focused image sequence;
determining a first actual depth position of the first calibration point and a second actual depth position of the second calibration point;
constructing a first depth calibration data set according to the first actual depth position and the first optimal refocusing coefficient, and constructing a second depth calibration data set according to the second actual depth position and the second optimal refocusing coefficient;
and performing curve fitting on the first depth calibration data set to obtain a first depth calibration curve, and performing curve fitting on the second depth calibration data set to obtain a second depth calibration curve.
3. The two-way image acquisition-based medical image three-dimensional imaging method according to claim 2, wherein the determining the first spatial position information of the target detection object in the first medical image data according to the first depth calibration curve and the determining the second spatial position information of the target detection object in the second medical image data according to the second depth calibration curve includes:
performing inverse fitting on the first depth calibration curve, and performing numerical solution through a preset first inverse function to obtain first spatial position information of a target detection object in the first medical image data;
and performing inverse fitting on the second depth calibration curve, and performing numerical solution through a preset second inverse function to obtain second spatial position information of the target detection object in the second medical image data.
4. The two-way image acquisition based medical image three-dimensional imaging method according to claim 3, wherein the constructing a first target three-dimensional tensor from the first spatial position information and the first focused image sequence and constructing a second target three-dimensional tensor from the second spatial position information and the second focused image sequence comprises:
determining a first slicing order of the first focused image sequence according to the first spatial position information, and determining a second slicing order of the second focused image sequence according to the second spatial position information;
constructing a first initial three-dimensional tensor of the first focused image sequence according to the first slicing order, and constructing a second initial three-dimensional tensor of the second focused image sequence according to the second slicing order;
and respectively inputting the first initial three-dimensional tensor and the second initial three-dimensional tensor into a preset convolutional long short-term memory network, extracting the spatio-temporal features of the target detection object through the convolutional long short-term memory network, and generating a corresponding first target three-dimensional tensor and a corresponding second target three-dimensional tensor.
5. The two-way image acquisition based medical image three-dimensional imaging method according to claim 4, wherein the fusing the first target three-dimensional tensor and the second target three-dimensional tensor to obtain a fused three-dimensional tensor, and performing three-dimensional imaging on the fused three-dimensional tensor to obtain a target three-dimensional imaging model of the target detection object, includes:
fusing the first target three-dimensional tensor and the second target three-dimensional tensor to obtain a fused three-dimensional tensor;
giving voxel values to the fusion three-dimensional tensor to obtain three-dimensional voxel data;
and carrying out voxel rendering on the three-dimensional voxel data to obtain a target three-dimensional imaging model of the target detection object.
6. A three-dimensional imaging system for medical images based on two-way image acquisition, the three-dimensional imaging system for medical images based on two-way image acquisition comprising:
the acquisition module is used for acquiring two paths of medical image data of the target detection object through two imaging devices with different viewing angles, respectively, to obtain first medical image data and second medical image data;
the computing module is used for respectively computing a first focused image sequence of the first medical image data and a second focused image sequence of the second medical image data through a preset digital refocusing algorithm; the method specifically comprises the following steps: performing grayscale conversion on the first medical image data and the second medical image data to obtain first grayscale image data and second grayscale image data; performing image correction on the first grayscale image data and the second grayscale image data to obtain first corrected image data and second corrected image data; downsampling the first corrected image data and the second corrected image data a plurality of times to generate a corresponding first image pyramid and a corresponding second image pyramid; taking the first image pyramid as first main image data, calculating the displacement between the second image pyramid and the first main image data to obtain first displacement data, and finding the image corresponding to each depth position in the first image pyramid by pixel interpolation according to the first displacement data to obtain a first initial image sequence; performing depth blur reconstruction and result integration on the first initial image sequence to obtain the first focused image sequence, wherein the first focused image sequence comprises a plurality of first refocusing images; taking the second image pyramid as second main image data, calculating the displacement between the first image pyramid and the second main image data to obtain second displacement data, and finding the image corresponding to each depth position in the second image pyramid by pixel interpolation according to the second displacement data to obtain a second initial image sequence; and performing depth blur reconstruction and result integration on the second initial image sequence to obtain the second focused image sequence, wherein the second focused image sequence comprises a plurality of second refocusing images;
the processing module is used for calculating a first optimal refocusing coefficient of the first focused image sequence and a second optimal refocusing coefficient of the second focused image sequence through a preset sharpness evaluation function; the method specifically comprises the following steps: respectively calculating gradient images of the plurality of first refocusing images in the first focused image sequence by using a second-order gradient operator to obtain a first gradient image of each first refocusing image, and respectively calculating gradient images of the plurality of second refocusing images in the second focused image sequence to obtain a second gradient image of each second refocusing image; calculating a first sharpness evaluation value corresponding to each first gradient image through the preset sharpness evaluation function, and calculating a second sharpness evaluation value corresponding to each second gradient image through the same sharpness evaluation function; and calculating the first optimal refocusing coefficient of the first focused image sequence according to the first sharpness evaluation values, and calculating the second optimal refocusing coefficient of the second focused image sequence according to the second sharpness evaluation values;
the calibration module is used for constructing a first depth calibration curve according to the first optimal refocusing coefficient and constructing a second depth calibration curve according to the second optimal refocusing coefficient;
the analysis module is used for determining first spatial position information of the target detection object in the first medical image data according to the first depth calibration curve, and determining second spatial position information of the target detection object in the second medical image data according to the second depth calibration curve;
the construction module is used for constructing a first target three-dimensional tensor according to the first spatial position information and the first focused image sequence, and constructing a second target three-dimensional tensor according to the second spatial position information and the second focused image sequence;
the imaging module is used for fusing the first target three-dimensional tensor and the second target three-dimensional tensor to obtain a fused three-dimensional tensor, and performing three-dimensional imaging on the fused three-dimensional tensor to obtain a target three-dimensional imaging model of the target detection object.
7. A three-dimensional imaging apparatus for medical image based on two-way image acquisition, characterized in that the three-dimensional imaging apparatus for medical image based on two-way image acquisition comprises: a memory and at least one processor, the memory having instructions stored therein;
the at least one processor invokes the instructions in the memory to cause the two-way image acquisition based medical image three-dimensional imaging device to perform the two-way image acquisition based medical image three-dimensional imaging method of any one of claims 1-5.
8. A computer readable storage medium having instructions stored thereon, which when executed by a processor, implement the two-way image acquisition based medical image three-dimensional imaging method of any one of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311365240.7A CN117095137B (en) | 2023-10-20 | 2023-10-20 | Three-dimensional imaging method and system of medical image based on two-way image acquisition |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117095137A (en) | 2023-11-21
CN117095137B (en) | 2023-12-22
Family
ID=88780252
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311365240.7A Active CN117095137B (en) | 2023-10-20 | 2023-10-20 | Three-dimensional imaging method and system of medical image based on two-way image acquisition |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117095137B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112465952A (en) * | 2020-11-28 | 2021-03-09 | 重庆邮电大学 | Light field camera micro-nano associated imaging sensing-based reconstruction method |
CN113808019A (en) * | 2021-09-14 | 2021-12-17 | 广东三水合肥工业大学研究院 | Non-contact measurement system and method |
CN114494248A (en) * | 2022-04-01 | 2022-05-13 | 之江实验室 | Three-dimensional target detection system and method based on point cloud and images under different visual angles |
CN115049827A (en) * | 2022-05-19 | 2022-09-13 | 广州文远知行科技有限公司 | Target object detection and segmentation method, device, equipment and storage medium |
Legal Events
Date | Code | Title | Description
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||