CN114974518A - Multi-mode data fusion lung nodule image recognition method and device

Multi-mode data fusion lung nodule image recognition method and device

Info

Publication number
CN114974518A
CN114974518A (application CN202210393642.7A)
Authority
CN
China
Prior art keywords: data, PET, database, fusion, modal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210393642.7A
Other languages
Chinese (zh)
Inventor
陈敬
洪东升
刘晓健
卢晓阳
唐林娜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN202210393642.7A priority Critical patent/CN114974518A/en
Publication of CN114974518A publication Critical patent/CN114974518A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 ICT specially adapted for the handling or processing of medical images
    • G16H30/20 ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment
    • A61B6/02 Devices for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
    • A61B6/03 Computerised tomographs
    • A61B6/032 Transmission computed tomography [CT]
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment
    • A61B6/50 Clinical applications
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment
    • A61B6/52 Devices using data or image processing specially adapted for radiation diagnosis
    • A61B6/5211 Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data
    • A61B6/5223 Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data generating planar views from image data, e.g. extracting a coronal view from a 3D image
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment
    • A61B6/52 Devices using data or image processing specially adapted for radiation diagnosis
    • A61B6/5211 Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data
    • A61B6/5229 Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data combining image data of a patient, e.g. combining a functional image with an anatomical image
    • A61B6/5235 Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data combining image data of a patient, e.g. combining a functional image with an anatomical image combining images from the same or different ionising radiation imaging techniques, e.g. PET and CT
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/51 Indexing; Data structures therefor; Storage structures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/53 Querying
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005 General purpose rendering architectures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10081 Computed x-ray tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10088 Magnetic resonance imaging [MRI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30061 Lung
    • G06T2207/30064 Lung nodule

Abstract

An embodiment of the invention provides a lung nodule image recognition method and device based on multi-modal data fusion. The method comprises the following steps: acquiring first database data corresponding to a PET-CT scanning device and second database data corresponding to a PET-MRI scanning device, determining data trends, and constructing multi-modal fusion analysis data according to the data trends; constructing a corresponding multi-modal fusion analysis module in combination with a pre-stored multi-modal algorithm; scanning the part to be examined with the PET-CT scanning device and the PET-MR scanning device, inputting the detection data into the multi-modal fusion analysis module, and outputting corresponding fused image data according to the data sequence of the detection data; and sending the fused image data to a receiving terminal of the relevant medical staff. With this method, the different scan images obtained by PET-MRI and PET-CT are automatically fused into an image in the same spatial coordinate system, so that the examination no longer relies on a single scanning mode, scanning accuracy is improved, and patients receive more scientific and accurate treatment.

Description

Multi-mode data fusion lung nodule image recognition method and device
Technical Field
The invention relates to the field of medical technology, and in particular to a lung nodule image recognition method and device based on multi-modal data fusion.
Background
Lung cancer is currently the most common tumor, and its incidence and mortality rank among the highest of all malignant tumors, seriously threatening people's health. The effective approach to improving lung cancer survival is early detection, early diagnosis and early treatment. With the roll-out of early lung cancer screening, lung cancer mortality has been reduced by 20%. Lung nodules are the main manifestation of early lung cancer: focal, roughly round, solid or sub-solid lung opacities of increased density and less than 3 cm in diameter. However, the imaging features of some lesions overlap, and it is often difficult to determine nodule morphology, margins, density and enhancement features by simple conventional examination, so an examination method that can accurately detect and quantitatively and qualitatively diagnose lung nodules is needed.
Imaging is an important means of differentiating benign from malignant lung nodules; compared with needle biopsy, it is non-invasive, low-risk and easy to perform. As imaging equipment is continuously updated, nodule characteristics that previously could not be displayed become visible on newer equipment, yet conventional CT examination still has difficulty distinguishing benign from malignant nodules. 18F-fluorodeoxyglucose (18F-FDG) PET/CT diagnoses pulmonary nodules according to their metabolic level: the imaging quantifies the cellular glucose metabolic rate and reflects tissue and organ metabolism at the molecular level, and because malignant tissue is metabolically active, a focal high concentration of radioactivity suggests a malignant nodule. SUVmax is the most commonly used PET/CT metabolic parameter; it is a semi-quantitative parameter based on the degree of tumor metabolism and reflects the metabolic activity at the site of highest 18F-FDG uptake within the tumor tissue, while SUVmean complements SUVmax by reflecting the average metabolic activity of the tumor focus. Because FDG is not a tumor-specific tracer, many benign lung lesions such as granuloma, inflammation and tuberculosis also take up FDG and appear hypermetabolic (SUVmax > 2.5), and for nodules less than 1.0 cm in diameter it is difficult to distinguish benign from malignant lesions by SUVmax alone, so conventional 18F-FDG PET/CT has certain limitations in differentiating benign from malignant lesions. Clinical case 1: PET/CT imaging of the patient showed an irregular patchy shadow at the apex of the right lung, with a larger cross-section of about 1.0 × 1.4 cm, uneven internal density, visible soft-tissue density and coarse high-density shadows, increased radioactive uptake in the distal soft-tissue portion with an SUV maximum of about 5.2, and peripheral long spiculations with pleural traction; a malignant tumor lesion was considered. Postoperative pathology indicated pulmonary parenchymal fibrous tissue hyperplasia with granuloma formation and a negative bronchial margin; the PET/CT image is shown in FIG. 2.
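For reference, the SUVmax and SUVmean values mentioned above are derived from the standardized uptake value, whose conventional definition (a standard formula, not specific to this disclosure) is:

```latex
\mathrm{SUV}(v) = \frac{C_{\text{tissue}}(v)\ [\mathrm{kBq/mL}]}{\text{injected dose}\ [\mathrm{kBq}] \,/\, \text{body weight}\ [\mathrm{g}]},\qquad
\mathrm{SUV_{max}} = \max_{v\in\text{lesion}} \mathrm{SUV}(v),\qquad
\mathrm{SUV_{mean}} = \frac{1}{N}\sum_{v\in\text{lesion}} \mathrm{SUV}(v)
```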
Case 2: PET/CT imaging showed increased and thickened markings in both lungs; a solid nodule of about 0.7 × 0.7 cm in a segment of the right upper lobe without increased radioactive uptake; and nodules of about 0.9 × 0.9 cm in the apical segment of the left upper lobe and the anterior segment of the right upper lobe, likewise without increased radioactive uptake. Postoperative pathology indicated invasive adenocarcinoma; the PET/CT image is shown in FIG. 3.
In view of the above, other examinations are needed to assist in improving the diagnostic accuracy for lung nodules.
Perfusion-weighted imaging (PWI), a functional magnetic resonance technique, reflects tissue micro-hemodynamic information, while diffusion-weighted imaging (DWI) is currently the only technique that can non-invasively detect the diffusion of water molecules in living tissue. Combining a patient's functional magnetic resonance (PWI, DWI) features with apparent diffusion coefficients (ADC) measured on a magnetic resonance post-processing workstation to characterize lesions is an active research direction. Researchers at home and abroad have shown that ADC and SUV are negatively correlated in various other tumors, interpreting this as a correlation between glucose metabolism and tumor cell density, and a diagnostic approach combining functional magnetic resonance imaging with metabolic imaging provides a basis for the later differential diagnosis of malignant tumors. Although conventional MRI has some specific diagnostic value, it cannot reflect the metabolic activity of a tumor, whereas PET-MRI can accurately reflect tumor cell metabolism, density and angiogenesis, which is of great help for clinical diagnosis, prognostic evaluation and the study of tumor pathophysiology.
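For reference, the apparent diffusion coefficient mentioned here is conventionally derived from two diffusion-weighted acquisitions with different b-values under a standard monoexponential model (again a textbook relation, not specific to this disclosure):

```latex
\mathrm{ADC} = \frac{\ln\!\left(S_{b_1}/S_{b_2}\right)}{b_2 - b_1},\qquad
\text{and with } b_1 = 0:\quad \mathrm{ADC} = \frac{1}{b}\,\ln\frac{S_0}{S_b}
```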
PET-MRI has been difficult to apply clinically in the lungs because of the low proton density of lung tissue, rapid T2 signal decay and magnetic field inhomogeneity. However, PET-MRI offers multi-sequence, multi-parameter imaging, can reduce cardiac and respiratory motion artifacts to varying degrees (respiratory motion has little effect on the detection of solid pulmonary nodules 3-4 mm in diameter), can provide functional information such as perfusion and diffusion, and involves no radiation, thereby providing more information for the differential diagnosis of lung nodules. Clinical case: a patient after rectal cancer surgery underwent regular follow-up chest CT. A well-defined solid nodular high-density image, about 10 × 7 mm with uniform density, was seen in the lateral basal segment of the left lower lobe (Se4, Im214), and a ground-glass nodule with an average diameter of about 5 mm and slightly blurred margins was seen in the dorsal segment of the left lower lobe (Se4, Im160); the nature of the nodules was difficult to determine from CT alone. Additional PET/CT showed nodular foci in the apicoposterior segment of the left upper lobe and the lateral basal segment of the left lower lobe, with clear margins, the larger one about 1.17 × 1.0 cm and slightly adherent to the adjacent pleura and oblique fissure, with increased radioactive uptake and an SUV maximum of about 3.86. PET/MRI showed nodular foci at the same sites, with clear margins, high signal on T2 and on DWI, increased radioactive uptake and an SUV maximum of about 3.86. Combining PET/CT and PET/MRI, the lung nodules were first considered metastases; FIG. 4 shows the PET-CT and PET/MRI images respectively. At present, however, scan data from different sources are compared empirically, which is not accurate enough, is prone to error, makes it difficult to form a unified standard for characterizing lung nodules, and imperceptibly increases the workload of medical staff.
Disclosure of Invention
In view of the problems in the prior art, embodiments of the present invention provide a lung nodule image recognition method and device based on multi-modal data fusion.
An embodiment of the invention provides a lung nodule image recognition method based on multi-modal data fusion, which comprises the following steps:
acquiring first database data corresponding to a PET-CT scanning device and second database data corresponding to a PET-MRI scanning device, determining data trends of the first database data and the second database data, and constructing multi-modal fusion analysis data according to the data trends;
constructing a corresponding multi-modal fusion analysis module according to the multi-modal fusion analysis data in combination with a pre-stored multi-modal algorithm;
after detecting that the system is powered on, scanning the part to be examined with a PET-CT scanning device and a PET-MR scanning device to obtain scanned detection data;
inputting the detection data into the multi-modal fusion analysis module, and outputting corresponding fused image data according to the data sequence of the detection data;
and sending the fused image data to a receiving terminal of the relevant medical staff, and uploading the fused image data to a cloud database.
In one embodiment, the method further comprises:
acquiring picture data contents in the first database data and the second database data, and determining the data trends according to the picture data contents;
determining the relationships among the picture data contents according to the distribution of the picture data contents, and constructing multi-modal fusion analysis data in the form of points and edges in a graph by combining these relationships.
In one embodiment, the method further comprises:
and performing image rendering on the first database data and the second database data, wherein the image rendering identifies the geometric relationship between the lesion site and the tissues around the lesion site in the first database data and the second database data, and the geometric relationship is displayed in a three-dimensional form through the image rendering.
In one embodiment, the method further comprises:
acquiring a pre-stored algorithm set, evaluating each algorithm in the algorithm set in turn against the multi-modal fusion analysis data, and extracting the optimal algorithm in the algorithm set according to the evaluation results;
and constructing a corresponding multi-modal fusion analysis module according to the multi-modal fusion analysis data and by combining the optimal algorithm.
In one embodiment, the method further comprises:
performing data preprocessing on the detection data, wherein the data preprocessing comprises data cleaning and data normalization.
An embodiment of the invention provides a lung nodule image recognition device based on multi-modal data fusion, which comprises:
an acquisition module, configured to acquire first database data corresponding to a PET-CT scanning device and second database data corresponding to a PET-MRI scanning device, determine data trends of the first database data and the second database data, and construct multi-modal fusion analysis data according to the data trends;
a construction module, configured to construct a corresponding multi-modal fusion analysis module according to the multi-modal fusion analysis data in combination with a pre-stored multi-modal algorithm;
a detection module, configured to scan the part to be examined with the PET-CT scanning device and the PET-MR scanning device after detecting that the system is powered on, to obtain scanned detection data;
an input module, configured to input the detection data into the multi-modal fusion analysis module and output corresponding fused image data according to the data sequence of the detection data;
and a sending module, configured to send the fused image data to a receiving terminal of the relevant medical staff and upload the fused image data to a cloud database.
In one embodiment, the apparatus further comprises:
a second acquisition module, configured to acquire the picture data contents in the first database data and the second database data and determine the data trends according to the picture data contents;
and a second construction module, configured to determine the relationships among the picture data contents according to the distribution of the picture data contents, and construct multi-modal fusion analysis data in the form of points and edges in a graph by combining these relationships.
In one embodiment, the apparatus further comprises:
and the rendering module is used for performing image rendering on the first database data and the second database data, identifying the geometric relationship between the lesion part and the tissues around the lesion part in the first database data and the second database data through the image rendering, and displaying the geometric relationship in a three-dimensional form through the image rendering.
The embodiment of the invention provides electronic equipment, which comprises a memory, a processor and a computer program which is stored on the memory and can run on the processor, wherein the processor executes the program to realize the steps of the multi-modal data fusion lung nodule image identification method.
An embodiment of the present invention provides a non-transitory computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the lung nodule image recognition method based on multi-modal data fusion.
The lung nodule image recognition method and device based on multi-modal data fusion provided by the embodiments of the invention acquire first database data corresponding to a PET-CT scanning device and second database data corresponding to a PET-MRI scanning device, determine data trends of the first database data and the second database data, and construct multi-modal fusion analysis data according to the data trends; construct a corresponding multi-modal fusion analysis module according to the multi-modal fusion analysis data in combination with a pre-stored multi-modal algorithm; after detecting that the system is powered on, scan the part to be examined with the PET-CT scanning device and the PET-MR scanning device to obtain scanned detection data; input the detection data into the multi-modal fusion analysis module and output corresponding fused image data according to the data sequence of the detection data; and send the fused image data to a receiving terminal of the relevant medical staff while uploading the fused image data to a cloud database. In this way, the different scan images of PET-MRI and PET-CT are automatically fused into an image in the same spatial coordinate system, so that the examination no longer relies on a single scanning mode, scanning accuracy is improved, and patients receive more scientific and accurate treatment.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a flowchart of a lung nodule image recognition method based on multi-modal data fusion according to an embodiment of the present invention;
FIG. 2 is a prior art PET/CT image;
FIG. 3 is another prior art PET/CT image;
FIG. 4 is a prior art comparison image of PET-CT and PET/MRI;
fig. 5 is a block diagram of a lung nodule image recognition apparatus with multi-modal data fusion according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an electronic device in an embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a schematic flow diagram of a lung nodule image recognition method based on multi-modal data fusion according to an embodiment of the present invention. As shown in Fig. 1, the embodiment of the present invention provides a lung nodule image recognition method based on multi-modal data fusion, comprising:
Step S101, acquiring first database data corresponding to a PET-CT scanning device and second database data corresponding to a PET-MRI scanning device, determining data trends of the first database data and the second database data, and constructing multi-modal fusion analysis data according to the data trends.
In particular, the PET-CT scanning device incorporates a PET and CT fusion module and is an integrated PET-CT machine. PET (positron emission computed tomography) is an imaging modality that reflects the genetic, molecular, metabolic and functional state of lesions, while CT (computed tomography) uses X-rays to perform tomographic imaging of the human body; the PET-CT fusion images produced by the integrated machine are stored in the first database. The PET-MRI scanning device incorporates a PET and MRI fusion module; MRI provides multi-parameter, multi-sequence imaging with good soft-tissue anatomical resolution and can easily distinguish soft tissues such as the brain, muscle, heart and tumors, and the PET-MRI fusion images produced by this integrated machine are stored in the second database. The data trends of the first database data and the second database data are then determined. In general, the first database data reflect tissue and organ metabolism at the molecular level by quantifying the cellular glucose metabolic rate, so their trend is toward tissue and organ metabolism; the second database data reflect cell movement within tissues and organs at the molecular level and can provide information such as blood flow, distribution, perfusion, local biochemistry and oxygen consumption inside a nodule, so their trend is toward cell movement within the organ. Multi-modal fusion analysis data are then constructed according to these data trends: for example, the relationships among the picture data contents of the first and second databases are determined from the distribution of the picture data contents, and the multi-modal fusion analysis data are constructed in the form of points and edges in a graph by combining these relationships. The relationship between picture data contents may be a complementary relationship: for example, the first database data tend toward tissue and organ metabolism while the second database data tend toward cell movement within the organ, and the two are complemented to generate new multi-modal fusion analysis data.
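As a concrete illustration of this point-and-edge organization, the following minimal sketch (one possible realization assumed for illustration, not the patented implementation; the record fields and the complementarity rule are hypothetical) stores findings from the two databases as graph nodes and links complementary findings with edges:

```python
import networkx as nx

def build_fusion_graph(pet_ct_records, pet_mri_records):
    """Store findings from both modalities in one graph: nodes are image findings,
    edges mark complementary relations between them."""
    g = nx.Graph()
    for rec in pet_ct_records:    # e.g. {"id": "ct_001", "lesion": "left_lower_lobe", "suv_max": 3.9}
        g.add_node(rec["id"], modality="PET-CT", **{k: v for k, v in rec.items() if k != "id"})
    for rec in pet_mri_records:   # e.g. {"id": "mr_001", "lesion": "left_lower_lobe", "adc": 1.1e-3}
        g.add_node(rec["id"], modality="PET-MRI", **{k: v for k, v in rec.items() if k != "id"})
    ct_nodes = [(n, d) for n, d in g.nodes(data=True) if d["modality"] == "PET-CT"]
    mr_nodes = [(n, d) for n, d in g.nodes(data=True) if d["modality"] == "PET-MRI"]
    # Hypothetical complementarity rule: findings describing the same lesion site are linked,
    # so metabolic (PET-CT) and diffusion/perfusion (PET-MRI) data complement each other.
    for ct_id, ct in ct_nodes:
        for mr_id, mr in mr_nodes:
            if ct.get("lesion") == mr.get("lesion"):
                g.add_edge(ct_id, mr_id, relation="complementary")
    return g
```

Under this reading, the resulting graph plays the role of the multi-modal fusion analysis data referred to in the later steps.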
In addition, image rendering may be performed on the first database data and the second database data. The rendering mainly serves to identify the geometric relationship between the lesion site and its surrounding tissue in the first and second database data, and this geometric relationship is displayed in three-dimensional form, which facilitates the subsequent construction of the multi-modal fusion analysis data.
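As one possible (assumed) realization of this rendering step, a segmented lesion mask could be turned into a three-dimensional surface with scikit-image and displayed with matplotlib; the segmentation itself is taken as given here:

```python
import numpy as np
from skimage import measure
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d.art3d import Poly3DCollection

def render_lesion_surface(mask: np.ndarray, level: float = 0.5) -> None:
    """Extract and display an isosurface of a binary lesion mask (z, y, x order)."""
    verts, faces, _normals, _values = measure.marching_cubes(mask.astype(float), level=level)
    fig = plt.figure()
    ax = fig.add_subplot(111, projection="3d")
    ax.add_collection3d(Poly3DCollection(verts[faces], alpha=0.6))  # lesion surface mesh
    ax.set_xlim(0, mask.shape[0]); ax.set_ylim(0, mask.shape[1]); ax.set_zlim(0, mask.shape[2])
    plt.show()
```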
Step S102, constructing a corresponding multi-modal fusion analysis module according to the multi-modal fusion analysis data in combination with a pre-stored multi-modal algorithm.
Specifically, the new multi-modal fusion analysis data generated through complementation are combined with the pre-stored multi-modal algorithm: data from the different modalities, together with the relationships among them, are stored in the same graph structure in point-and-edge form, so that fusion modeling of data from different modalities is achieved at the storage layer of the database and the multi-modal fusion analysis module is generated. The multi-modal algorithm may include a neural network algorithm, a genetic algorithm and the like. A pre-stored algorithm set may be obtained, each algorithm in the set may be evaluated in turn against the multi-modal fusion analysis data, the optimal algorithm in the set may be extracted according to the evaluation results, and the corresponding multi-modal fusion analysis module may then be constructed according to the multi-modal fusion analysis data in combination with that optimal algorithm.
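One way the "evaluate each pre-stored algorithm and keep the best" step could look in practice is sketched below, using scikit-learn classifiers as stand-ins; the actual pre-stored algorithm set and the scoring rule are not specified in the disclosure, so both are assumptions for illustration:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

def select_best_algorithm(features: np.ndarray, labels: np.ndarray):
    """Score every candidate on the multi-modal fusion features and return the best one."""
    candidates = {
        "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
        "logistic_regression": LogisticRegression(max_iter=1000),
        "svm_rbf": SVC(kernel="rbf", probability=True),
    }
    scores = {name: cross_val_score(model, features, labels, cv=5).mean()
              for name, model in candidates.items()}
    best_name = max(scores, key=scores.get)
    return best_name, candidates[best_name], scores
```

The winning algorithm would then be wrapped, together with the fusion data schema, into the multi-modal fusion analysis module.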
Step S103, after detecting that the system is powered on, scanning the part to be examined with the PET-CT scanning device and the PET-MR scanning device to obtain scanned detection data.
Specifically, once it is detected that the image recognition system has been powered on, indicating that a patient needs to be scanned, the PET-CT scanning device and the PET-MR scanning device scan the part of the patient to be examined to obtain scan data, including the image data scanned by the PET-CT scanning device and the image data scanned by the PET-MR scanning device.
In addition, after the detection data are scanned, data preprocessing may be performed on them; the preprocessing includes data cleaning and data normalization, which makes it easier to subsequently input the two sets of scanned image data into the multi-modal fusion analysis module.
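A minimal sketch of this cleaning and normalization step, assuming the detection data arrive as NumPy volumes (the clipping percentiles are illustrative only):

```python
import numpy as np

def preprocess_volume(volume: np.ndarray) -> np.ndarray:
    """Clean obvious artifacts and normalize intensities to [0, 1]."""
    vol = np.nan_to_num(volume, nan=0.0, posinf=0.0, neginf=0.0)   # data cleaning
    lo, hi = np.percentile(vol, [0.5, 99.5])                        # clip extreme outliers (illustrative)
    vol = np.clip(vol, lo, hi)
    return (vol - vol.min()) / (vol.max() - vol.min() + 1e-8)       # min-max normalization
```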
Step S104, inputting the detection data into the multi-modal fusion analysis module, and outputting corresponding fused image data according to the data sequence of the detection data.
Specifically, the detection data are input into the multi-modal fusion analysis module, which complements the image data scanned by the PET-CT scanning device with the image data scanned by the PET-MR scanning device to generate new multi-modal fused image data, and the fused image data generated by the module are output according to the data sequence of the detected image data.
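The fusion operation itself is not spelled out in the disclosure. Purely as a hedged sketch, two volumes that have already been registered and resampled to the same spatial grid could be combined voxel-wise with a weighted sum; both the weight and the assumption that registration has already been performed are illustrative:

```python
import numpy as np

def fuse_volumes(pet_ct: np.ndarray, pet_mri: np.ndarray, w_ct: float = 0.5) -> np.ndarray:
    """Weighted voxel-wise fusion of two volumes already resampled to the same grid."""
    if pet_ct.shape != pet_mri.shape:
        raise ValueError("Volumes must be registered and resampled to the same shape first")
    return w_ct * pet_ct + (1.0 - w_ct) * pet_mri
```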
Step S105, sending the fused image data to a receiving terminal of the relevant medical staff, and uploading the fused image data to a cloud database.
Specifically, the data are passed to an information transmission module and to a recording module; the recording module backs the data up to the cloud database and keeps a record, and the data are sent to the data receiving terminal of the relevant medical staff for analysis and use.
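A sketch of this forwarding step, assuming a hypothetical REST endpoint for the cloud database and a hypothetical terminal inbox (both URLs and the payload fields are invented for illustration; the disclosure does not specify a transport protocol):

```python
import json
import requests

CLOUD_DB_URL = "https://cloud.example.org/api/fusion-records"   # hypothetical endpoint
TERMINAL_URL = "https://terminal.example.org/api/inbox"          # hypothetical endpoint

def dispatch_fused_result(patient_id: str, fused_image_path: str) -> None:
    """Send the fused image record to the staff terminal and back it up to the cloud DB."""
    record = {"patient_id": patient_id, "fused_image": fused_image_path}
    for url in (TERMINAL_URL, CLOUD_DB_URL):
        resp = requests.post(url, data=json.dumps(record),
                             headers={"Content-Type": "application/json"}, timeout=10)
        resp.raise_for_status()  # surface transmission failures to the caller
```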
The lung nodule image recognition method based on multi-modal data fusion provided by the embodiment of the invention acquires first database data corresponding to a PET-CT scanning device and second database data corresponding to a PET-MRI scanning device, determines data trends of the first and second database data, and constructs multi-modal fusion analysis data according to the data trends; constructs a corresponding multi-modal fusion analysis module according to the multi-modal fusion analysis data in combination with a pre-stored multi-modal algorithm; after detecting that the system is powered on, scans the part to be examined with the PET-CT scanning device and the PET-MR scanning device to obtain scanned detection data; inputs the detection data into the multi-modal fusion analysis module and outputs corresponding fused image data according to the data sequence of the detection data; and sends the fused image data to a receiving terminal of the relevant medical staff while uploading them to a cloud database. In this way, the different scan images of PET-MRI and PET-CT are automatically fused into an image in the same spatial coordinate system, so that the examination no longer relies on a single scanning mode, scanning accuracy is improved, and patients receive more scientific and accurate treatment.
Fig. 5 shows a lung nodule image recognition apparatus based on multi-modal data fusion according to an embodiment of the present invention, comprising an acquisition module S201, a construction module S202, a detection module S203, an input module S204 and a sending module S205, wherein:
the acquisition module S201 is configured to acquire first database data corresponding to the PET-CT scanning device and second database data corresponding to the PET-MRI scanning device, determine data trends of the first database data and the second database data, and construct multi-modal fusion analysis data according to the data trends.
The construction module S202 is configured to construct a corresponding multi-modal fusion analysis module according to the multi-modal fusion analysis data in combination with a pre-stored multi-modal algorithm.
The detection module S203 is configured to scan the part to be examined with the PET-CT scanning device and the PET-MR scanning device after detecting that the system is powered on, to obtain scanned detection data.
The input module S204 is configured to input the detection data into the multi-modal fusion analysis module and output corresponding fused image data according to the data sequence of the detection data.
The sending module S205 is configured to send the fused image data to a receiving terminal of the relevant medical staff and upload the fused image data to a cloud database.
In one embodiment, the apparatus may further comprise:
and the second acquisition module is used for acquiring the picture data contents in the first database data and the second database data and determining the data tendency according to the picture data contents.
And the second construction module is used for determining the relationship among the picture data contents according to the distribution of the picture data contents, and constructing multi-modal fusion analysis data in the form of points and edges in the picture by combining the relationship of the picture data contents.
In one embodiment, the apparatus may further comprise:
and the rendering module is used for performing image rendering on the first database data and the second database data, identifying the geometric relationship between the lesion part and the tissues around the lesion part in the first database data and the second database data through the image rendering, and displaying the geometric relationship in a three-dimensional form through the image rendering.
In one embodiment, the apparatus may further comprise:
a preprocessing module, configured to perform data preprocessing on the detection data, where the data preprocessing includes: and (4) cleaning and normalizing data.
For the specific definition of the lung nodule image recognition apparatus based on multi-modal data fusion, reference may be made to the definition of the lung nodule image recognition method based on multi-modal data fusion above; details are not repeated here. All or part of the modules in the apparatus may be implemented in software, in hardware, or in a combination of the two. Each module may be embedded in hardware form in, or independent of, a processor in a computer device, or stored in software form in a memory of the computer device, so that the processor can invoke and execute the operations corresponding to the module.
Fig. 6 illustrates the physical structure of an electronic device which, as shown in Fig. 6, may include: a processor 301, a memory 302, a communication interface 303 and a communication bus 304, wherein the processor 301, the memory 302 and the communication interface 303 communicate with each other via the communication bus 304. The processor 301 may call logic instructions in the memory 302 to perform the following method: acquiring first database data corresponding to a PET-CT scanning device and second database data corresponding to a PET-MRI scanning device, determining data trends of the first database data and the second database data, and constructing multi-modal fusion analysis data according to the data trends; constructing a corresponding multi-modal fusion analysis module according to the multi-modal fusion analysis data in combination with a pre-stored multi-modal algorithm; after detecting that the system is powered on, scanning the part to be examined with the PET-CT scanning device and the PET-MR scanning device to obtain scanned detection data; inputting the detection data into the multi-modal fusion analysis module and outputting corresponding fused image data according to the data sequence of the detection data; and sending the fused image data to a receiving terminal of the relevant medical staff and uploading the fused image data to a cloud database.
Furthermore, the logic instructions in the memory 302 may be implemented in software functional units and stored in a computer readable storage medium when sold or used as a stand-alone product. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
In another aspect, an embodiment of the present invention further provides a non-transitory computer-readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the method provided by the foregoing embodiments, for example comprising: acquiring first database data corresponding to a PET-CT scanning device and second database data corresponding to a PET-MRI scanning device, determining data trends of the first database data and the second database data, and constructing multi-modal fusion analysis data according to the data trends; constructing a corresponding multi-modal fusion analysis module according to the multi-modal fusion analysis data in combination with a pre-stored multi-modal algorithm; after detecting that the system is powered on, scanning the part to be examined with the PET-CT scanning device and the PET-MR scanning device to obtain scanned detection data; inputting the detection data into the multi-modal fusion analysis module and outputting corresponding fused image data according to the data sequence of the detection data; and sending the fused image data to a receiving terminal of the relevant medical staff and uploading the fused image data to a cloud database.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A lung nodule image recognition method based on multi-modal data fusion, characterized by comprising:
acquiring first database data corresponding to a PET-CT scanning device and second database data corresponding to a PET-MRI scanning device, determining data trends of the first database data and the second database data, and constructing multi-modal fusion analysis data according to the data trends;
constructing a corresponding multi-modal fusion analysis module according to the multi-modal fusion analysis data in combination with a pre-stored multi-modal algorithm;
after detecting that the system is powered on, scanning the part to be examined with a PET-CT scanning device and a PET-MR scanning device to obtain scanned detection data;
inputting the detection data into the multi-modal fusion analysis module, and outputting corresponding fused image data according to the data sequence of the detection data;
and sending the fused image data to a receiving terminal of the relevant medical staff, and uploading the fused image data to a cloud database.
2. The lung nodule image recognition method based on multi-modal data fusion according to claim 1, wherein the determining data trends of the first database data and the second database data and constructing multi-modal fusion analysis data according to the data trends comprises:
acquiring picture data contents in the first database data and the second database data, and determining the data trends according to the picture data contents;
determining the relationships among the picture data contents according to the distribution of the picture data contents, and constructing the multi-modal fusion analysis data in the form of points and edges in a graph by combining these relationships.
3. The method for lung nodule image recognition based on multi-modality data fusion according to claim 1, wherein the acquiring the first database data corresponding to the PET-CT scanner and the second database data corresponding to the PET-MRI scanner further comprises:
and performing image rendering on the first database data and the second database data, wherein the image rendering identifies the geometric relationship between the lesion site and the tissues around the lesion site in the first database data and the second database data, and the geometric relationship is displayed in a three-dimensional form through the image rendering.
4. The lung nodule image recognition method based on multi-modal data fusion according to claim 1, wherein the constructing a corresponding multi-modal fusion analysis module according to the multi-modal fusion analysis data in combination with a pre-stored multi-modal algorithm comprises:
acquiring a pre-stored algorithm set, evaluating each algorithm in the algorithm set in turn against the multi-modal fusion analysis data, and extracting the optimal algorithm in the algorithm set according to the evaluation results;
and constructing the corresponding multi-modal fusion analysis module according to the multi-modal fusion analysis data in combination with the optimal algorithm.
5. The method for lung nodule image recognition based on multi-modal data fusion as claimed in claim 1, further comprising, after obtaining the scanned detection data:
performing data preprocessing on the detection data, wherein the data preprocessing comprises data cleaning and data normalization.
6. A multi-modal data fused lung nodule image recognition apparatus, characterized in that the apparatus comprises:
an acquisition module, configured to acquire first database data corresponding to a PET-CT scanning device and second database data corresponding to a PET-MRI scanning device, determine data trends of the first database data and the second database data, and construct multi-modal fusion analysis data according to the data trends;
a construction module, configured to construct a corresponding multi-modal fusion analysis module according to the multi-modal fusion analysis data in combination with a pre-stored multi-modal algorithm;
a detection module, configured to scan the part to be examined with the PET-CT scanning device and the PET-MR scanning device after detecting that the system is powered on, to obtain scanned detection data;
an input module, configured to input the detection data into the multi-modal fusion analysis module and output corresponding fused image data according to the data sequence of the detection data;
and a sending module, configured to send the fused image data to a receiving terminal of the relevant medical staff and upload the fused image data to a cloud database.
7. The multi-modal data fused lung nodule image recognition apparatus as recited in claim 6, further comprising:
a second acquisition module, configured to acquire the picture data contents in the first database data and the second database data and determine the data trends according to the picture data contents;
and a second construction module, configured to determine the relationships among the picture data contents according to the distribution of the picture data contents, and construct the multi-modal fusion analysis data in the form of points and edges in a graph by combining these relationships.
8. The multi-modal data fused lung nodule image recognition apparatus as recited in claim 6, further comprising:
and the rendering module is used for performing image rendering on the first database data and the second database data, identifying the geometric relationship between the lesion part and the tissues around the lesion part in the first database data and the second database data through the image rendering, and displaying the geometric relationship in a three-dimensional form through the image rendering.
9. An electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the method for lung nodule image recognition based on multi-modal data fusion as claimed in any one of claims 1 to 5 when executing the program.
10. A non-transitory computer readable storage medium having stored thereon a computer program, wherein the computer program when executed by a processor implements the steps of the multi-modal data fused lung nodule image recognition method as claimed in any one of claims 1 to 5.
CN202210393642.7A 2022-04-15 2022-04-15 Multi-mode data fusion lung nodule image recognition method and device Pending CN114974518A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210393642.7A CN114974518A (en) 2022-04-15 2022-04-15 Multi-mode data fusion lung nodule image recognition method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210393642.7A CN114974518A (en) 2022-04-15 2022-04-15 Multi-mode data fusion lung nodule image recognition method and device

Publications (1)

Publication Number Publication Date
CN114974518A true CN114974518A (en) 2022-08-30

Family

ID=82978184

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210393642.7A Pending CN114974518A (en) 2022-04-15 2022-04-15 Multi-mode data fusion lung nodule image recognition method and device

Country Status (1)

Country Link
CN (1) CN114974518A (en)

Patent Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104523287A (en) * 2015-01-12 2015-04-22 天津医科大学第二医院 Stereoscopic orientation frame for PET, CT and MR image fusion of head and neck range
CN106204511A (en) * 2016-07-15 2016-12-07 西安交通大学第附属医院 A kind of two dimensional image and the three-dimensional fusion method of CT, MR image
US20190139236A1 (en) * 2016-12-28 2019-05-09 Shanghai United Imaging Healthcare Co., Ltd. Method and system for processing multi-modality image
CN109949404A (en) * 2019-01-16 2019-06-28 深圳市旭东数字医学影像技术有限公司 Based on Digital Human and CT and/or the MRI image three-dimensional rebuilding method merged and system
CN109938764A (en) * 2019-02-28 2019-06-28 佛山原子医疗设备有限公司 A kind of adaptive multiple location scan imaging method and its system based on deep learning
WO2021022752A1 (en) * 2019-08-07 2021-02-11 深圳先进技术研究院 Multimodal three-dimensional medical image fusion method and system, and electronic device
CN110533641A (en) * 2019-08-20 2019-12-03 东软医疗系统股份有限公司 A kind of multimodal medical image registration method and apparatus
CN112288683A (en) * 2020-06-30 2021-01-29 深圳市智影医疗科技有限公司 Pulmonary tuberculosis judgment device and method based on multi-mode fusion
CN111915596A (en) * 2020-08-07 2020-11-10 杭州深睿博联科技有限公司 Method and device for predicting benign and malignant pulmonary nodules
CN112184720A (en) * 2020-08-27 2021-01-05 首都医科大学附属北京同仁医院 Method and system for segmenting rectus muscle and optic nerve of CT image
WO2022063199A1 (en) * 2020-09-24 2022-03-31 上海健康医学院 Pulmonary nodule automatic detection method, apparatus and computer system
CN112164019A (en) * 2020-10-12 2021-01-01 珠海市人民医院 CT and MR scanning image fusion method
CN112365587A (en) * 2020-12-01 2021-02-12 成都中创五联科技有限公司 System and method for multi-mode three-dimensional modeling of tomographic image suitable for auxiliary diagnosis and treatment
CN112488976A (en) * 2020-12-11 2021-03-12 华中科技大学 Multi-modal medical image fusion method based on DARTS network
CN112750097B (en) * 2021-01-14 2022-04-05 中北大学 Multi-modal medical image fusion based on multi-CNN combination and fuzzy neural network
CN112826590A (en) * 2021-02-02 2021-05-25 复旦大学 Knee joint replacement spatial registration system based on multi-modal fusion and point cloud registration
CN113140035A (en) * 2021-04-27 2021-07-20 青岛百洋智能科技股份有限公司 Full-automatic human cerebral vessel reconstruction method and device based on multi-modal image fusion technology
CN113674251A (en) * 2021-08-25 2021-11-19 北京积水潭医院 Lumbar vertebra image classification and identification system, equipment and medium based on multi-mode images

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
刘晓飞 et al., "Comparative study of multi-modality imaging of solitary pulmonary nodules based on PET/CT", Journal of Medical Imaging (《医学影像学杂志》) *
邱陈辉, "Research on multi-modal biomedical image fusion based on multi-scale geometric analysis and sparse representation theory", China Doctoral Dissertations Full-text Database, Medicine and Health Sciences (《中国博士学位论文全文数据库医药卫生科技辑》) *
高峰 et al., "A multi-modal medical image data fusion method and its application", China Medical Devices (《中国医疗设备》) *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116071740A (en) * 2023-03-06 2023-05-05 深圳前海环融联易信息科技服务有限公司 Invoice identification method, computer equipment and storage medium

Similar Documents

Publication Publication Date Title
US11501485B2 (en) System and method for image-based object modeling using multiple image acquisitions or reconstructions
US7935055B2 (en) System and method of measuring disease severity of a patient before, during and after treatment
US10339648B2 (en) Quantitative predictors of tumor severity
US9514530B2 (en) Systems and methods for image-based object modeling using multiple image acquisitions or reconstructions
JP5081390B2 (en) Method and system for monitoring tumor burden
CN103889328B (en) Perfusion imaging
CN110458837B (en) Image post-processing method and device, electronic equipment and storage medium
CN1895185B (en) Method for displaying information of checking region of checked object and influence of drug in vivo.
CN109498046A (en) The myocardial infarction quantitative evaluating method merged based on nucleic image with CT coronary angiography
CN111368827A (en) Medical image processing method, medical image processing device, computer equipment and storage medium
Li et al. Multi-modality cardiac image computing: A survey
CN111369675A (en) Three-dimensional visual model reconstruction method and device based on lung nodule visceral layer pleural projection
CN114974518A (en) Multi-mode data fusion lung nodule image recognition method and device
US11636589B2 (en) Identification of candidate elements in images for determination of disease state using atlas elements
KR102337031B1 (en) Medical image reconstruction apparatus and method for screening a plurality of types lung diseases
CN113706541B (en) Image processing method and device
Zhang et al. Mesenteric vasculature-guided small bowel segmentation on high-resolution 3D CT angiography scans
Kalapos et al. Automated T1 and T2 mapping segmentation on cardiovascular magnetic resonance imaging using deep learning
Quijano et al. The Predictive Value of 3D Imaging Reconstruction Models in Patients with Vascular Resection for Pancreatic Surgery.
US20080050001A1 (en) Use of Subsets of the Acquired Data to Improve the Diagnostic Outcome in Cardiac SPECT Imaging
Chacón et al. Computational assessment of stomach tumor volume from multi-slice computerized tomography images in presence of type 2 cancer [version 2; referees: 2 approved, 1 not approved]
Chacón et al. Computational assessment of stomach tumor volume from multi-slice computerized tomography images in presence of type 2 cancer [version 2; referees: 1 approved, 1 not approved]
Carretero Pulmonary Artery-Vein Segmentation in Real and Synthetic CT Images
Qi et al. Magnetic Resonance Image under the Low-Rank Matrix Denoising Algorithm in Evaluating the Efficacy of Neoadjuvant Chemo-Radiotherapy for Rectal Cancer
Bandla et al. Artificial Intelligence in Healthcare with Special Consideration in Radio Imaging

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20220830)