CN114170133A - Intelligent focus positioning method, system, device and medium based on medical image - Google Patents

Intelligent lesion localization method, system, device and medium based on medical images

Info

Publication number
CN114170133A
CN114170133A
Authority
CN
China
Prior art keywords
image
medical image
dimensional
lung
lateral
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111287559.3A
Other languages
Chinese (zh)
Inventor
龙显荣
叶一农
王威
邓东华
张锡林
黄戈
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Foshan Fourth People's Hospital (foshan Tuberculosis Control Institute)
Original Assignee
Foshan Fourth People's Hospital (foshan Tuberculosis Control Institute)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Foshan Fourth People's Hospital (foshan Tuberculosis Control Institute) filed Critical Foshan Fourth People's Hospital (foshan Tuberculosis Control Institute)
Priority to CN202111287559.3A priority Critical patent/CN114170133A/en
Publication of CN114170133A publication Critical patent/CN114170133A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30061 Lung
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30096 Tumor; Lesion

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Molecular Biology (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • Probability & Statistics with Applications (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses an intelligent lesion localization method, system, device and medium based on medical images. The method comprises: acquiring a frontal medical image and a lateral medical image of a target to be processed; performing image segmentation on the frontal medical image and the lateral medical image to obtain a frontal lung image and a lateral lung image; performing lesion detection and feature extraction on the frontal lung image to obtain two-dimensional frontal feature data; performing lesion detection and feature extraction on the lateral medical image to obtain two-dimensional lateral feature data; orthogonally stacking the two-dimensional frontal feature data and the two-dimensional lateral feature data to obtain three-dimensional feature data; and obtaining a lung lesion localization result for the target to be processed from the three-dimensional feature data. By combining medical images acquired from multiple angles, the method mines more effective detection information from the images and yields a more accurate lesion localization result. The method and device can be widely applied in the technical field of medical image processing.

Description

Intelligent lesion localization method, system, device and medium based on medical images
Technical Field
The present application relates to the field of medical image processing technologies, and in particular, to a method, a system, an apparatus, and a medium for intelligent lesion localization based on medical images.
Background
With the development of computer vision technology, computer-aided detection can effectively reduce the workload of doctors, assist in disease judgment based on medical images, improve the stability and efficiency of that judgment, and streamline the diagnostic process.
In the related art, lung lesions are typically localized by segmenting the medical image and then performing detection and recognition, so as to automatically detect whether lesion features appear in the relevant parts of the human body. However, because target lesions can vary greatly in size and texture, the localization methods currently in use are insufficiently accurate and often lead to misdiagnosis.
In view of the above, there is a need to solve the technical problems in the related art.
Disclosure of Invention
The present application aims to solve at least one of the technical problems in the related art to some extent.
Therefore, an object of the embodiments of the present application is to provide an intelligent lesion localization method based on medical images that locates lesions more accurately and thereby improves the effectiveness of computer-aided diagnosis.
It is another object of embodiments of the present application to provide an intelligent lesion localization system based on medical images.
In order to achieve the technical purpose, the technical scheme adopted by the embodiment of the application comprises the following steps:
in a first aspect, an embodiment of the present application provides a method for intelligently locating a lesion based on a medical image, where the method includes the following steps:
acquiring a frontal medical image and a lateral medical image of a target to be processed;
performing image segmentation on the frontal medical image and the lateral medical image to obtain a frontal lung image and a lateral lung image;
performing lesion detection and feature extraction on the frontal lung image to obtain two-dimensional frontal feature data;
performing lesion detection and feature extraction on the lateral medical image to obtain two-dimensional lateral feature data;
orthogonally stacking the two-dimensional frontal feature data and the two-dimensional lateral feature data to obtain three-dimensional feature data;
and obtaining a lung lesion localization result for the target to be processed from the three-dimensional feature data.
In addition, the intelligent lesion positioning method based on medical images according to the above embodiments of the present application may further have the following additional technical features:
further, in an embodiment of the present application, after acquiring the normal medical image and the lateral medical image of the target to be processed, the method further includes:
preprocessing the orthostatic medical image and the lateral medical image; the preprocessing includes at least one of an image compression process, a scale normalization process, and a histogram equalization process.
Further, in an embodiment of the present application, performing image segmentation on the frontal medical image to obtain the frontal lung image includes:
inputting the frontal medical image into an image segmentation model to obtain a target region and a target category output by the model;
and cropping, from the frontal medical image, the target region whose target category is lung, to obtain the frontal lung image.
Further, in an embodiment of the present application, the image segmentation model is a Faster R-CNN model, and cropping the target region whose category is lung from the frontal medical image to obtain the frontal lung image includes:
determining, in turn, the probability of each candidate target category for each target region;
and taking the target region in the frontal medical image with the highest probability of the lung category as the cropping region, and cropping it to obtain the frontal lung image.
Further, in an embodiment of the present application, orthogonally stacking the two-dimensional frontal feature data and the two-dimensional lateral feature data to obtain the three-dimensional feature data includes:
orthogonally stacking the two-dimensional frontal feature data and two-dimensional lateral feature data produced by a plurality of different detection models to obtain initial feature data corresponding to each detection model;
and computing a weighted combination of the initial feature data of all the detection models to obtain the three-dimensional feature data.
Further, in one embodiment of the present application, the method further comprises:
inputting the three-dimensional feature data into a classification network to obtain a lung lesion type prediction result for the target to be processed.
In a second aspect, the present application provides an intelligent lesion localization system based on medical images, the system including:
an acquisition module, configured to acquire a frontal medical image and a lateral medical image of a target to be processed;
a segmentation module, configured to perform image segmentation on the frontal medical image and the lateral medical image to obtain a frontal lung image and a lateral lung image;
a first processing module, configured to perform lesion detection and feature extraction on the frontal lung image to obtain two-dimensional frontal feature data;
a second processing module, configured to perform lesion detection and feature extraction on the lateral medical image to obtain two-dimensional lateral feature data;
a stacking module, configured to orthogonally stack the two-dimensional frontal feature data and the two-dimensional lateral feature data to obtain three-dimensional feature data;
and a localization module, configured to obtain a lung lesion localization result for the target to be processed from the three-dimensional feature data.
In a third aspect, an embodiment of the present application provides a computer device, including:
at least one processor;
at least one memory for storing at least one program;
wherein the at least one program, when executed by the at least one processor, causes the at least one processor to implement the medical-image-based intelligent lesion localization method of the first aspect.
In a fourth aspect, the present application further provides a computer-readable storage medium, in which a processor-executable program is stored, and the processor-executable program, when executed by a processor, is used for implementing the medical image-based intelligent lesion localization method according to the first aspect.
Advantages and benefits of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application:
according to the intelligent focus positioning method based on the medical images, provided by the embodiment of the application, the method comprises the steps of obtaining a normal medical image and a lateral medical image of a target to be processed; performing image segmentation processing on the positive medical image and the lateral medical image to obtain a positive lung image and a lateral lung image; performing focus detection processing and feature extraction on the righting lung image to obtain two-dimensional righting feature data; performing focus detection processing and feature extraction on the lateral medical image to obtain two-dimensional lateral feature data; orthogonally stacking the two-dimensional normal position feature data and the two-dimensional side position feature data to obtain three-dimensional feature data; and obtaining a lung focus positioning result of the target to be processed according to the three-dimensional characteristic data. The method carries out positioning analysis on the focus based on the normal medical image and the lateral medical image, can mine more effective detection information from the image, and can obtain more accurate focus positioning results by combining the medical images in multiple angles.
Drawings
To more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed for describing the embodiments or the prior art are briefly introduced below. It should be understood that the following drawings illustrate only some embodiments of the present application, and that those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of an implementation environment of an intelligent lesion location method based on medical images according to the present application;
FIG. 2 is a schematic flow chart illustrating an embodiment of a method for intelligently locating a lesion based on medical images according to the present application;
FIG. 3 is a schematic structural diagram of an embodiment of an intelligent lesion localization system based on medical images according to the present application;
fig. 4 is a schematic structural diagram of an embodiment of an intelligent lesion locating device based on medical images according to the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary only for the purpose of explaining the present application and are not to be construed as limiting the present application. The step numbers in the following embodiments are provided only for convenience of illustration, the order between the steps is not limited at all, and the execution order of each step in the embodiments can be adapted according to the understanding of those skilled in the art.
At present, with the development of computer vision technology, computer-aided detection can effectively reduce the workload of doctors, assist them in disease judgment based on medical images, improve the stability and efficiency of that judgment, and streamline the diagnostic process. For example, pulmonary tuberculosis is a chronic infectious disease transmitted through the respiratory tract. In recent years the tuberculosis epidemic has worsened owing to multidrug-resistant tuberculosis, co-infection with tubercle bacillus and the AIDS virus, and a growing mobile population, seriously harming public health and becoming a major public health and social problem. DR (digital radiography) chest radiographs play an important role in the primary screening of pulmonary tuberculosis, but given the variety of pulmonary nodule signs and their complex and diverse manifestations, doctors find it difficult to judge accurately and consistently when screening large numbers of medical images, and the workload is huge and tedious.
With the development of computer vision, computer-aided detection can effectively reduce the workload of doctors, assist in disease judgment based on medical images, improve its stability and efficiency, and streamline diagnosis. However, high-precision lung parenchyma segmentation and detection of suspected lesion areas remain the key tasks. In the related art, because target lesions may vary greatly in size and texture, current lesion localization methods are insufficiently accurate and often cause misdiagnosis. An intelligent lesion localization method is therefore needed to improve detection accuracy and efficiency.
Referring to fig. 1, fig. 1 is a schematic diagram of an implementation environment of a method provided in an embodiment of the present application. In fig. 1, the implementation environment includes an operation terminal 101, a server 102 and an intelligent lesion locating device 103, wherein the operation terminal 101 is in communication connection with the server 102. The intelligent lesion locating device 103 may be disposed in the operation terminal 101, or may be disposed in the server 102, and may be appropriately selected according to the actual application, which is not specifically limited in this embodiment, and fig. 1 illustrates an example in which the intelligent lesion locating device 103 is disposed in the operation terminal 101.
In the embodiment of the present application, the operation terminal 101 may include, but is not limited to, a mobile phone, a computer, an intelligent voice interaction device, an intelligent household appliance, a vehicle-mounted terminal, and the like. Alternatively, the operation terminal 101 has a service client installed thereon, and the user can use various services provided by the operation terminal 101 through the service client. When the intelligent lesion locating device 103 is disposed in the operation terminal 101, the operation terminal 101 may execute the method of the present application according to the operation performed by the user through the service client, so as to locate the lung lesion of the target user; or, when the intelligent lesion locating device 103 is disposed in the server 102, the operation terminal 101 may send a corresponding operation instruction to the server 102 according to an operation performed by the user through the service client, so that the server 102 executes the method of the present application to locate the lung lesion of the target user, and returns a location result to the operation terminal 101 for display.
The server 102 may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN (Content Delivery Network), and big data and artificial intelligence platforms.
The method in the embodiments of the present application may be applied to a computer device; specifically, it may be implemented by storing the program code in the memory of the computer device and executing it with the associated processor. The method is described and illustrated below as executing on a computer device; of course, it may also execute on other types of terminal devices or servers, with an implementation principle similar to that of the foregoing computer device, which is not detailed again here.
Specifically, referring to fig. 2, the method in the present application mainly includes steps 110 to 160:
Step 110, acquiring a frontal medical image and a lateral medical image of a target to be processed;
In this step, when executing the method of the embodiments of the present application, chest X-ray (CXR) images of the target to be processed may be acquired with the relevant medical device: the image acquired from the front of the target is referred to as the frontal medical image, and the image acquired from the side as the lateral medical image. Because tissues overlap in the frontal medical image, individual differences cause the image features of the same lesion to differ markedly between patients; moreover, the spatial resolution of a two-dimensional image is limited, so localizing a lesion from a single frontal medical image is often uncertain.
Although the lateral medical image is heavily affected by overlapping tissue and is therefore difficult to use on its own for diagnosis, it provides very intuitive information for determining whether thoracic disease is present, and comparing lesion features between the frontal and lateral medical images can effectively improve the accuracy of lung disease diagnosis and localization. Therefore, based on the frontal and lateral medical images, the method may use a multi-channel deep neural network model to extract multi-level lung lesion features: not only the common feature patterns of traditional schemes, but also features such as the spatial position of a nodule and statistics-based texture features. This helps segment the lung parenchyma and detect lung lesions effectively, improving the accuracy of the localization result.
It should be noted that the embodiments of the present application do not limit how the frontal and lateral medical images are actually acquired: in some embodiments they may be obtained directly from the relevant image acquisition device; in others, from other hardware or software components. In addition, in some embodiments the acquired frontal and lateral medical images may be preprocessed to improve image quality and facilitate the subsequent localization analysis. Specifically, the preprocessing may include at least one of image compression, scale normalization, and histogram equalization. For example, the frontal medical image may undergo image compression, scale normalization, and histogram equalization in sequence to obtain the preprocessed frontal medical image.
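As an illustrative sketch (not part of the patent text), the scale normalization and histogram equalization steps mentioned above could look like the following in plain NumPy; the function names and the 512×512 target size are assumptions for the example, and a real system would likely use a library routine instead.

```python
import numpy as np

def normalize_scale(img, size=512):
    """Nearest-neighbour resize of a 2-D grayscale image to size x size."""
    h, w = img.shape
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return img[rows][:, cols]

def equalize_histogram(img):
    """Classic histogram equalization for an 8-bit grayscale image
    (assumes the image contains at least two distinct gray levels)."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    scaled = np.clip((cdf - cdf_min) / (cdf[-1] - cdf_min), 0.0, 1.0)
    lut = np.round(scaled * 255).astype(np.uint8)
    return lut[img]

def preprocess(img):
    """Scale normalization followed by histogram equalization."""
    return equalize_histogram(normalize_scale(img))
```

The order of operations here (resize, then equalize) is one reasonable choice; the patent only states that the steps may be applied in sequence.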
Step 120, performing image segmentation on the frontal medical image and the lateral medical image to obtain a frontal lung image and a lateral lung image;
In this step, image segmentation may be performed on the frontal and lateral medical images to extract the lung regions: the image segmented from the frontal medical image is recorded as the frontal lung image, and the image segmented from the lateral medical image as the lateral lung image. Taking the frontal lung image as an example, in some possible embodiments the frontal medical image may be input into an image segmentation model to obtain the target region and target category output by the model, where the target region is the image extent of a target segmented by the model and the target category is the class of tissue or organ the model recognizes within that region. For example, the image segmentation model used in the embodiments of the present application may be a Faster R-CNN model, pre-trained on a public data set. Taking a CXR image set as an example, the lung regions can be mask-annotated in the CXR images, and the initial Faster R-CNN model is then trained on the annotated image set to obtain a trained Faster R-CNN model ready for use.
The Faster R-CNN model is a typical neural network for object detection. Trained on the image set, it has two outputs: one is the rectangular box of a recognized target, i.e., the target region; the other is the probability that the object inside the box belongs to each predetermined target category. In general, the model alone can identify the lung region and generate a rectangular box, i.e., the recognized lung target region lies within that box, and the frontal lung image can be cropped from the frontal medical image according to the target region. In some embodiments the model may output multiple detections, i.e., multiple rectangular boxes, with each box's target region carrying a probability for every target category. For example, in box A the target region may be 80% likely to be the heart and 20% likely to be a lung; in box B, 10% heart, 80% lung, and 10% arm, and so on. In that case, the target region in the frontal medical image with the highest probability for the lung category is used as the cropping region, and the frontal lung image is cropped from it. It can be understood that the lateral lung image is obtained in the same way as the frontal lung image, so the details are not repeated here.
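The box-selection logic described above can be sketched as follows. The detection output format here (a list of boxes with per-class probabilities, mirroring the A/B box example in the text) is a simplification invented for illustration, not the actual Faster R-CNN API.

```python
import numpy as np

# Hypothetical detector output: each detection is a bounding box
# (x0, y0, x1, y1) plus a probability for each candidate class,
# mirroring the "box A / box B" example in the description.
detections = [
    {"box": (40, 30, 120, 90),  "probs": {"heart": 0.8, "lung": 0.2}},
    {"box": (10, 20, 200, 220), "probs": {"heart": 0.1, "lung": 0.8, "arm": 0.1}},
]

def crop_lung(image, detections):
    """Pick the detection with the highest 'lung' probability and crop it."""
    best = max(detections, key=lambda d: d["probs"].get("lung", 0.0))
    x0, y0, x1, y1 = best["box"]
    return image[y0:y1, x0:x1]

image = np.zeros((256, 256), dtype=np.uint8)  # stand-in for a frontal CXR
lung = crop_lung(image, detections)           # crops box B, the lung region
```

The same selection rule applies unchanged to the lateral image.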
Step 130, performing lesion detection and feature extraction on the frontal lung image to obtain two-dimensional frontal feature data;
Step 140, performing lesion detection and feature extraction on the lateral medical image to obtain two-dimensional lateral feature data;
Step 150, orthogonally stacking the two-dimensional frontal feature data and the two-dimensional lateral feature data to obtain three-dimensional feature data;
In these steps, the frontal lung image obtained from the previous segmentation may be input into a feature detection network model for lesion detection and feature extraction, yielding the two-dimensional frontal feature data. Here, lesion detection means using a detection model to find the regions of the frontal lung image where lesions may exist, and feature extraction means extracting the image features of those regions. Feature detection network models that may be used include, but are not limited to, VGGNet, ResNet, DenseNet, GoogLeNet, UNet, and Mask R-CNN, which may be pre-trained on public data sets, e.g., using medical images with lesion annotations.
Specifically, these models may be trained on existing frontal and lateral medical images carrying a lung lesion segmentation label and a lesion type classification label. The segmentation label identifies the region of the image where a lesion exists; the classification label identifies the lesion type in that region, which may include, but is not limited to, cavity, nodule, hydrops, fibrosis, infiltration, effusion, and emphysema. The classification labels may be one-hot encoded to construct the training data set.
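The one-hot encoding of the lesion type labels mentioned above can be illustrated with a minimal sketch; the class ordering is an arbitrary choice for the example.

```python
# The lesion classes listed in the description; the ordering is arbitrary.
CLASSES = ["cavity", "nodule", "hydrops", "fibrosis",
           "infiltration", "effusion", "emphysema"]

def one_hot(label):
    """Encode a lesion type label as a one-hot vector over CLASSES."""
    vec = [0] * len(CLASSES)
    vec[CLASSES.index(label)] = 1
    return vec
```

Each training example's classification label thus becomes a 7-dimensional binary vector with a single 1 at the index of its class.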
After preprocessing, these images are fed into each feature detection network model as the training data set. The models learn image features of the lung region (including but not limited to texture, scale, gray level, curvature, and gradient features) to detect the position of the lesion region, and the model parameters are adjusted against the lung lesion segmentation labels, yielding trained models capable of lesion detection.
Further, based on the lesion detection processing described above, feature data of the segmented lesion regions of interest may be extracted and recorded as two-dimensional frontal feature data and two-dimensional lateral feature data, respectively. Here, the feature data may be the image data itself, and the two sets of two-dimensional feature data are orthogonally stacked to obtain three-dimensional feature data. Specifically, three-dimensional image data is constructed from the two-dimensional image data as follows: because the image scales of the frontal medical image and the lateral medical image may differ, the image with the smaller vertical extent is first zero-padded at the bottom; the two-dimensional lateral feature data is then repeated along the frontal horizontal axis according to the frontal horizontal resolution, and, similarly, the two-dimensional frontal feature data is repeated along the lateral horizontal axis according to the lateral horizontal resolution, so that the two stacks occupy the same three-dimensional grid. It can likewise be understood that the lung lesion segmentation labels may be stacked in the same manner to obtain corresponding three-dimensional label data.
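The padding-and-repeating scheme just described can be sketched as follows. This is one reading of the description, not the patent's exact procedure: heights are equalised by bottom zero-padding, the frontal map is repeated along the depth (lateral) axis, the lateral map along the width (frontal) axis, and the two aligned volumes are then combined (here by element-wise product; stacking them as two channels would fit the text equally well):

```python
import numpy as np

def orthogonal_stack(frontal, lateral):
    """Orthogonally stack a 2D frontal map (H_f x W) and a 2D lateral
    map (H_l x D) into one 3D volume of shape (H, W, D)."""
    # Equalise heights by zero-padding the shorter image at the bottom.
    h = max(frontal.shape[0], lateral.shape[0])
    f = np.pad(frontal, ((0, h - frontal.shape[0]), (0, 0)))
    l = np.pad(lateral, ((0, h - lateral.shape[0]), (0, 0)))
    # Repeat the frontal map along the depth axis (lateral resolution)
    # and the lateral map along the width axis (frontal resolution).
    vol_f = np.repeat(f[:, :, None], l.shape[1], axis=2)  # (H, W, D)
    vol_l = np.repeat(l[:, None, :], f.shape[1], axis=1)  # (H, W, D)
    return vol_f * vol_l  # element-wise combination (one option)

front = np.ones((4, 3))   # taller frontal view
side = np.ones((2, 5))    # shorter lateral view, padded to height 4
vol = orthogonal_stack(front, side)
```

Because the lateral view here is padded with zeros in its bottom rows, the product volume is zero in the corresponding slab, which is exactly the behaviour one wants from bottom zero-padding.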
And 160, obtaining a lung lesion localization result of the target to be processed according to the three-dimensional feature data.
In this step, the three-dimensional feature data obtained by the stacking may be input into a conventional mainstream three-dimensional convolutional neural network. For example, the network may include 4 convolutional layers with (3, 3, 3) kernels and 64, 128, 256, and 256 filters respectively, followed by 3 fully-connected layers, a softmax layer, and 1 output layer, with a pooling layer between stages; the pooling layers may all use max pooling. Processing the three-dimensional feature data through this three-dimensional convolutional neural network yields the final lung lesion localization result. In some embodiments, the three-dimensional feature data may further be input into a trained classification network to obtain a prediction of the lung lesion type of the target to be processed, which characterizes the lesion types present in the lung.
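The layer sizes implied by this example architecture can be checked with simple arithmetic. Padding and strides are not specified in the text, so the sketch below assumes 'same'-padded 3x3x3 convolutions and 2x2x2 max pooling with stride 2 between stages, and traces a hypothetical 64x64x64 input volume through the four conv stages:

```python
def conv3d_out(size, k=3, pad=0, stride=1):
    """Output size of one spatial dimension after a 3D convolution."""
    return (size + 2 * pad - k) // stride + 1

def pool3d_out(size, k=2, stride=2):
    """Output size of one spatial dimension after max pooling."""
    return (size - k) // stride + 1

# Trace a 64^3 input through 4 conv stages with filter counts
# 64, 128, 256, 256 as given in the text; each conv ('same' padding,
# so spatial size is preserved) is followed by 2x2x2 max pooling.
shape, channels = [64, 64, 64], 1
for n_filters in (64, 128, 256, 256):
    shape = [conv3d_out(s, k=3, pad=1) for s in shape]  # size unchanged
    shape = [pool3d_out(s) for s in shape]              # each dim halved
    channels = n_filters
# shape is now [4, 4, 4] with 256 channels, i.e. the tensor that would
# be flattened and fed into the 3 fully-connected layers.
```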
In some optional embodiments, when determining the three-dimensional feature data, features may be extracted by several different feature detection network models. The two-dimensional frontal and lateral feature data produced by each model are orthogonally stacked to obtain initial feature data for that model, and the initial feature data of all the models are then combined by weighted summation to obtain the three-dimensional feature data. This integrates the feature extraction capability of each type of model, substantially improving the completeness and usability of the obtained features and thus the accuracy of the final localization.
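The weighted summation over per-model volumes is straightforward; the weights below are illustrative (in practice they could be fixed, validated, or learned, which the text leaves open):

```python
import numpy as np

def fuse_features(volumes, weights):
    """Weighted sum of the per-model 3D initial feature volumes."""
    assert len(volumes) == len(weights)
    out = np.zeros_like(volumes[0], dtype=float)
    for vol, w in zip(volumes, weights):
        out += w * vol
    return out

# Two toy per-model volumes fused with illustrative weights.
a = np.ones((2, 2, 2))
b = 3 * np.ones((2, 2, 2))
fused = fuse_features([a, b], [0.25, 0.75])
```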
An intelligent lesion localization system based on medical images according to an embodiment of the present application will be described in detail below with reference to the accompanying drawings.
Referring to fig. 3, an intelligent lesion localization system based on medical images in an embodiment of the present application includes:
an obtaining module 201, configured to obtain a frontal medical image and a lateral medical image of a target to be processed;
a segmentation module 202, configured to perform image segmentation processing on the frontal medical image and the lateral medical image to obtain a frontal lung image and a lateral lung image;
a first processing module 203, configured to perform lesion detection processing and feature extraction on the frontal lung image to obtain two-dimensional frontal feature data;
a second processing module 204, configured to perform lesion detection processing and feature extraction on the lateral lung image to obtain two-dimensional lateral feature data;
a stacking module 205, configured to orthogonally stack the two-dimensional frontal feature data and the two-dimensional lateral feature data to obtain three-dimensional feature data;
and a localization module 206, configured to obtain a lung lesion localization result of the target to be processed according to the three-dimensional feature data.
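To make the module decomposition concrete, a minimal skeleton of the system might look like the following; each stage is injected as a callable so the wiring stays independent of any particular model, and the lambda stand-ins are purely illustrative, not the real segmentation, detection, stacking, or localization logic:

```python
class LesionLocalizationSystem:
    """Minimal wiring of modules 201-206."""

    def __init__(self, segment, detect, stack, locate):
        self.segment = segment   # segmentation module (202)
        self.detect = detect     # processing modules (203/204)
        self.stack = stack       # stacking module (205)
        self.locate = locate     # localization module (206)

    def run(self, frontal, lateral):
        # The obtaining module (201) is represented by the arguments.
        lung_f, lung_l = self.segment(frontal), self.segment(lateral)
        feat_f, feat_l = self.detect(lung_f), self.detect(lung_l)
        return self.locate(self.stack(feat_f, feat_l))

# Toy stand-ins for each stage, for illustration only.
system = LesionLocalizationSystem(
    segment=lambda img: img,          # identity "segmentation"
    detect=lambda img: [sum(img)],    # one trivial "feature" per view
    stack=lambda f, l: f + l,         # list concatenation as "stacking"
    locate=lambda feats: max(feats),  # pick the stronger response
)
result = system.run([1, 2, 3], [4, 5])
```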
It is to be understood that the contents of the foregoing method embodiments all apply to this system embodiment: the functions it implements are the same as those of the method embodiments, and so are the advantageous effects achieved.
Referring to fig. 4, an embodiment of the present application provides an intelligent lesion localization device based on medical images, including:
at least one processor 301;
at least one memory 302 for storing at least one program;
the at least one program, when executed by the at least one processor 301, causes the at least one processor 301 to implement the intelligent lesion localization method based on medical images described above.
Similarly, the contents of the above method embodiments all apply to this device embodiment; the functions it implements and the beneficial effects it achieves are the same as those of the method embodiments.

An embodiment of the present application further provides a computer-readable storage medium storing a program executable by the processor 301; when executed by the processor 301, the program performs the intelligent lesion localization method based on medical images described above.

Similarly, the contents of the above method embodiments all apply to this storage-medium embodiment; the functions it implements and the beneficial effects it achieves are the same as those of the method embodiments.
In alternative embodiments, the functions/acts noted in the block diagrams may occur out of the order noted in the operational illustrations. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved. Furthermore, the embodiments presented and described in the flowcharts of the present application are provided by way of example in order to provide a more thorough understanding of the technology. The disclosed methods are not limited to the operations and logic flows presented herein. Alternative embodiments are contemplated in which the order of various operations is changed and in which sub-operations described as part of larger operations are performed independently.
Furthermore, although the present application is described in the context of functional modules, it should be understood that, unless otherwise stated to the contrary, one or more of the functions and/or features may be integrated in a single physical device and/or software module, or one or more functions and/or features may be implemented in separate physical devices or software modules. It will also be appreciated that a detailed discussion regarding the actual implementation of each module is not necessary for an understanding of the present application. Rather, the actual implementation of the various functional modules in the apparatus disclosed herein will be understood within the ordinary skill of an engineer, given the nature, function, and internal relationship of the modules. Accordingly, those skilled in the art can, using ordinary skill, practice the present application as set forth in the claims without undue experimentation. It is also to be understood that the specific concepts disclosed are merely illustrative of and not intended to limit the scope of the application, which is defined by the appended claims and their full scope of equivalents.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
In the foregoing description of the specification, reference to the description of "one embodiment/example," "another embodiment/example," or "certain embodiments/examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the present application have been shown and described, it will be understood by those of ordinary skill in the art that: numerous changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the application, the scope of which is defined by the claims and their equivalents.
While the present application has been described with reference to the preferred embodiments, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the application as defined by the appended claims.

Claims (9)

1. An intelligent lesion localization method based on medical images, characterized by comprising the following steps:
acquiring a frontal medical image and a lateral medical image of a target to be processed;
performing image segmentation processing on the frontal medical image and the lateral medical image to obtain a frontal lung image and a lateral lung image;
performing lesion detection processing and feature extraction on the frontal lung image to obtain two-dimensional frontal feature data;
performing lesion detection processing and feature extraction on the lateral lung image to obtain two-dimensional lateral feature data;
orthogonally stacking the two-dimensional frontal feature data and the two-dimensional lateral feature data to obtain three-dimensional feature data;
and obtaining a lung lesion localization result of the target to be processed according to the three-dimensional feature data.
2. The intelligent lesion localization method based on medical images according to claim 1, wherein, after the acquiring of the frontal medical image and the lateral medical image of the target to be processed, the method further comprises:
preprocessing the frontal medical image and the lateral medical image; the preprocessing includes at least one of an image compression process, a scale normalization process, and a histogram equalization process.
3. The intelligent lesion localization method based on medical images according to claim 1, wherein the performing of image segmentation processing on the frontal medical image to obtain a frontal lung image comprises:
inputting the frontal medical image into an image segmentation model to obtain a target region and a target category output by the image segmentation model;
and cropping, from the frontal medical image, the target region whose target category is the lung, to obtain the frontal lung image.
4. The intelligent lesion localization method based on medical images according to claim 3, wherein the image segmentation model is a Faster R-CNN model, and the cropping of the target region whose target category is the lung from the frontal medical image to obtain the frontal lung image comprises:
sequentially determining the probabilities of the various target categories corresponding to each target region;
and taking the target region in the frontal medical image with the highest probability of the target category being the lung as the cropping region, and cropping it to obtain the frontal lung image.
5. The intelligent lesion localization method based on medical images according to claim 1, wherein the orthogonally stacking of the two-dimensional frontal feature data and the two-dimensional lateral feature data to obtain three-dimensional feature data comprises:
orthogonally stacking the two-dimensional frontal feature data and two-dimensional lateral feature data produced by a plurality of different detection models, to obtain initial feature data corresponding to each detection model;
and performing a weighted summation of the initial feature data corresponding to all the detection models, to obtain the three-dimensional feature data.
6. The intelligent lesion localization method based on medical images according to any one of claims 1 to 5, wherein the method further comprises:
inputting the three-dimensional feature data into a classification network to obtain a lung lesion type prediction result for the target to be processed.
7. An intelligent lesion localization system based on medical images, comprising:
an obtaining module, configured to obtain a frontal medical image and a lateral medical image of a target to be processed;
a segmentation module, configured to perform image segmentation processing on the frontal medical image and the lateral medical image to obtain a frontal lung image and a lateral lung image;
a first processing module, configured to perform lesion detection processing and feature extraction on the frontal lung image to obtain two-dimensional frontal feature data;
a second processing module, configured to perform lesion detection processing and feature extraction on the lateral lung image to obtain two-dimensional lateral feature data;
a stacking module, configured to orthogonally stack the two-dimensional frontal feature data and the two-dimensional lateral feature data to obtain three-dimensional feature data;
and a localization module, configured to obtain a lung lesion localization result of the target to be processed according to the three-dimensional feature data.
8. An intelligent lesion localization device based on medical images, comprising:
at least one processor;
at least one memory for storing at least one program;
wherein the at least one program, when executed by the at least one processor, causes the at least one processor to implement the intelligent lesion localization method based on medical images according to any one of claims 1-6.
9. A computer-readable storage medium in which a processor-executable program is stored, characterized in that: the processor-executable program, when executed by a processor, implements the intelligent lesion localization method based on medical images according to any one of claims 1-6.
Application CN202111287559.3A, filed 2021-11-02: Intelligent focus positioning method, system, device and medium based on medical image (legal status: Pending)
Publication: CN114170133A, published 2022-03-11
Family ID: 80477725

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination