CN116912224A - Focus detection method, focus detection device, storage medium and electronic equipment - Google Patents

Focus detection method, focus detection device, storage medium and electronic equipment

Info

Publication number
CN116912224A
CN116912224A (application CN202310922203.5A)
Authority
CN
China
Prior art keywords
model
medical image
image
organ
detected
Prior art date
Legal status
Pending
Application number
CN202310922203.5A
Other languages
Chinese (zh)
Inventor
王雪纯
石峰
薛忠
Current Assignee
Shanghai United Imaging Intelligent Healthcare Co Ltd
Original Assignee
Shanghai United Imaging Intelligent Healthcare Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai United Imaging Intelligent Healthcare Co Ltd filed Critical Shanghai United Imaging Intelligent Healthcare Co Ltd
Priority to CN202310922203.5A priority Critical patent/CN116912224A/en
Publication of CN116912224A publication Critical patent/CN116912224A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30096Tumor; Lesion
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Databases & Information Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Processing (AREA)

Abstract

The specification discloses a lesion detection method, a device, a storage medium and an electronic device. First, a human medical image of a patient is acquired, and an image of an organ to be detected is determined in the human medical image. The human medical image and the image of the organ to be detected are then input into a first model to obtain a processing result for the image of the organ to be detected and an attention area of the first model, where the attention area is the region of the human medical image on which the first model based the processing result. Finally, based on the processing result, the human medical image and the attention area are input into a second model to determine the lesion position in the human medical image. Because the first model relies mainly on the attention area when determining the processing result, the method locates the patient's lesion with reference to that attention area, which greatly improves the accuracy of the detected lesion.

Description

Focus detection method, focus detection device, storage medium and electronic equipment
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and apparatus for detecting a lesion, a storage medium, and an electronic device.
Background
With the development of science and technology, medical technology has also advanced rapidly. Diseases, especially cancers, cause great pain and can even cost lives, so scientific medical examination is an important means of safeguarding human health.
Generally, when a medical examination is performed on a human body, medical imaging equipment such as a computed tomography (CT) scanner can be used to obtain a CT image of the human body, and the lesion position can then be determined based on the CT image. However, having a doctor determine the specific lesion location by visually inspecting CT images is inefficient, and the accuracy of the detection result cannot be guaranteed.
Based on this, the present specification provides a method of detecting a lesion.
Disclosure of Invention
The present disclosure provides a method, an apparatus, a storage medium, and an electronic device for detecting a lesion, so as to at least partially solve the foregoing problems in the prior art.
The technical scheme adopted in the specification is as follows:
the present specification provides a method of detecting a lesion, the method comprising:
Acquiring a human body medical image of a patient;
determining an image of an organ to be detected in the human medical image;
inputting the human medical image and the image of the organ to be detected into a first model to obtain a processing result and an attention area of the first model on the image of the organ to be detected; wherein the attention area is an area in the medical image of the human body according to which the first model determines the processing result;
and inputting the human medical image and the attention area into a second model based on the processing result, and determining the focus position in the human medical image.
Optionally, determining an image of the organ to be detected in the human medical image specifically includes:
inputting the human body medical image into an organ segmentation model to obtain an image of an organ to be detected in the human body medical image.
Optionally, before inputting the medical image of the human body and the image of the organ to be detected into the first model, the method further comprises:
acquiring clinical information of the patient;
inputting the human medical image and the image of the organ to be detected into a first model, wherein the method specifically comprises the following steps:
Inputting the clinical information, the human medical image and the image of the organ to be detected into a first model.
Optionally, the first model comprises a feature extraction layer and a result output layer; the feature extraction layer is used for extracting first features of the human medical image and the image of the organ to be detected, and inputting the first features into the result output layer; the result output layer is used for obtaining the processing result and the attention area according to the input first features;
the method further comprises the steps of:
determining omics features of the image of the organ to be detected and clinical features of the patient's clinical information;
and inputting the omics features, the clinical features and the first features into an evaluation model to obtain an evaluation result.
Optionally, the evaluation model comprises: a feature fusion network, a convolution network;
inputting the first features, the omics features and the clinical features into the evaluation model specifically comprises:
inputting the first features, the omics features and the clinical features into the feature fusion network to obtain fused features;
and inputting the fused features into the convolution network.
Optionally, the first model is trained by the following method:
acquiring a first sample human medical image;
determining an image of a first sample organ to be detected in the first sample human medical image;
determining a processing result of labeling the first sample human body medical image as a first label;
inputting an image of a first sample organ to be detected in the first sample human body medical image into a first model to be trained, and obtaining a processing result output by the first model to be trained and an attention area output by the first model to be trained;
and training the first model to be trained with the goal of minimizing the difference between the first label and the processing result output by the first model to be trained.
Optionally, the second model is trained using the following method:
acquiring a second sample human medical image;
determining an image of a second sample organ to be detected in the second sample human medical image; determining the true focus position in the second sample human medical image, and taking the true focus position as a second label;
inputting the second sample human medical image and the image of the organ to be detected of the second sample into a trained first model to obtain a second processing result and a second attention area;
And inputting the second sample human medical image, the image of the organ to be detected of the second sample and the second attention area into the second model, and training the second model by taking the minimum difference between the output result of the second model and the second label as an optimization target.
The present specification provides a detection system for a lesion, the system comprising: medical imaging equipment, computing equipment;
the medical imaging device is used for acquiring a human body medical image of a patient and transmitting the human body medical image to the computing device;
the computing device is used for receiving the human medical image, determining the image of the organ to be detected in the human medical image, inputting the image of the organ to be detected and the human medical image into the first model to obtain a processing result and an attention area, and, based on the processing result, inputting the human medical image and the attention area into the second model to determine the focus position in the human medical image.
The present specification provides a detection apparatus for a lesion, comprising:
the acquisition module is used for acquiring a human medical image of a patient;
An organ determining module for determining an image of an organ to be detected in the medical image of the human body;
the input module is used for inputting the human medical image and the image of the organ to be detected into a first model to obtain a processing result and an attention area of the first model on the image of the organ to be detected; wherein the attention area is an area in the medical image of the human body according to which the first model determines the processing result;
and the focus determining module is used for inputting the human body medical image and the attention area into a second model based on the processing result to determine the focus position in the human body medical image.
Optionally, the organ determining module is specifically configured to input the human medical image into an organ segmentation model to obtain an image of an organ to be detected in the human medical image.
Optionally, the acquiring module is further configured to acquire clinical information of the patient;
the input module is specifically configured to input the clinical information, the medical image of the human body, and the image of the organ to be detected into a first model.
Optionally, the first model comprises a feature extraction layer and a result output layer; the feature extraction layer is used for extracting first features of the human medical image and the image of the organ to be detected, and inputting the first features into the result output layer; the result output layer is used for obtaining the processing result and the attention area according to the input first features;
The apparatus further comprises: an evaluation module;
the evaluation module is specifically used for determining omics features of the image of the organ to be detected and clinical features of the patient's clinical information, and inputting the omics features, the clinical features and the first features into an evaluation model to obtain an evaluation result.
Optionally, the evaluation model comprises: a feature fusion network, a convolution network;
the evaluation module is specifically configured to input the first feature, the omics feature, and the clinical feature into the feature fusion network to obtain a fusion feature; the fusion features are input into the convolutional network.
Optionally, the apparatus further comprises: a model training module;
the model training module is specifically used for acquiring a first sample human medical image; determining an image of a first sample organ to be detected in the first sample human medical image; determining a processing result of labeling the first sample human medical image as a first label; inputting an image of a first sample organ to be detected in the first sample human medical image into a first model to be trained, and obtaining a processing result output by the first model to be trained and an attention area output by the first model to be trained; and training the first model to be trained with the goal of minimizing the difference between the first label and the processing result output by the first model to be trained.
Optionally, the model training module is specifically configured to obtain a second sample human medical image; determining an image of a second sample organ to be detected in the second sample human medical image; determining the true focus position in the second sample human medical image, and taking the true focus position as a second label; inputting the second sample human medical image and the image of the organ to be detected of the second sample into a trained first model to obtain a second processing result and a second attention area; and inputting the second sample human medical image, the image of the organ to be detected of the second sample and the second attention area into the second model, and training the second model by taking the minimum difference between the output result of the second model and the second label as an optimization target.
The present specification provides a computer readable storage medium storing a computer program which, when executed by a processor, implements the method of detecting lesions described above.
The present specification provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method of detecting lesions described above when executing the program.
At least one of the technical solutions adopted in this specification can achieve the following beneficial effects:
in the lesion detection method provided in this specification, the processing result for the image of the organ to be detected in the human medical image is determined first, together with the attention area of the human medical image on which the first model based that result; the lesion position in the human medical image is then determined from the processing result and the attention area. Because the first model relies mainly on the attention area when determining the processing result, and the attention area provides the difference information between the patient's organ to be detected and a normal person's, the method provided in this specification takes that difference information, i.e. the influence of the attention area on the lesion, into account when determining the focus position.
Drawings
The accompanying drawings, which are included to provide a further understanding of the specification, illustrate and explain the exemplary embodiments of the present specification and are not intended to limit the specification unduly. In the accompanying drawings:
fig. 1 is a schematic flow chart of a method for detecting a lesion in the present specification;
fig. 2 is a schematic diagram of a focus detection apparatus provided in the present specification;
fig. 3 is a schematic view of the electronic device corresponding to fig. 1 provided in the present specification.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the present specification more apparent, the technical solutions of the present specification will be clearly and completely described below with reference to specific embodiments of the present specification and corresponding drawings. It will be apparent that the described embodiments are only some, but not all, of the embodiments of the present specification. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are intended to be within the scope of the present disclosure.
The following describes in detail the technical solutions provided by the embodiments of the present specification with reference to the accompanying drawings.
Fig. 1 is a flow chart of a method for detecting a lesion provided in the present specification, which specifically includes the following steps:
s100: a medical image of a human body of a patient is acquired.
S102: an image of an organ to be detected in the medical image of the human body is determined.
In general, owing to disease invasion, patients such as cancer patients exhibit many direct and/or indirect characteristic manifestations at sites near the cancer lesion that differ from the corresponding sites in normal people: direct manifestations such as invasion across membranes, perineural infiltration and lymph node involvement, and indirect manifestations such as duct obstruction and dilation of surrounding organs. These manifestations reflect the difference between a patient and a normal person near the cancer lesion, so the patient's condition can be determined from them, and they can likewise be consulted when locating the lesion. The detection method provided by this specification can determine the patient's lesion based on the attention area determined in the human medical image, i.e. the region on which the diagnosis of the patient's condition is based. The attention area provides difference information between the diseased site and/or its surroundings and the corresponding sites of a normal person; in other words, it is a region of the human medical image that contains the direct and/or indirect characteristic manifestations of the patient's condition. This differs from methods that detect the lesion from the diseased site alone, and greatly improves the accuracy of the detected lesion.
The method described in the present application may be executed by any computing device with computing capabilities, such as a server or a terminal.
When detecting a patient's lesion position, the computing device may first acquire a medical image of the patient captured by medical imaging equipment such as a computed tomography (CT) or magnetic resonance imaging (MRI) scanner, and then determine the image of the organ to be detected in the patient's medical image.
In one or more embodiments of this specification, when determining the image of the organ to be detected in the human medical image, the human medical image may be input into an organ segmentation model to obtain the image of the organ to be detected output by the model; alternatively, an image-masking method may be used to determine an organ mask of the organ to be detected in the human medical image and take that mask as the image of the organ to be detected. Of course, other methods may also be used; this specification imposes no particular limitation, as long as the image of the organ to be detected can be determined from the patient's human medical image.
For example, when a patient is suspected of having pancreatic cancer, a CT angiography (CTA) image of the patient may be acquired by non-invasive vascular imaging and input into an organ segmentation model to obtain the pancreas segmented from the patient's CTA image.
It should be noted that this specification neither limits the method for determining the organ to be detected nor the organ itself; for example, the organ to be detected may be determined from the patient's clinical condition information.
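For intuition only, the following is a minimal PyTorch sketch of how the organ image of step S102 might be obtained; the class name OrganSegNet, the tiny network body and the masking interface are illustrative assumptions, not part of the claimed method.

```python
import torch
import torch.nn as nn

class OrganSegNet(nn.Module):
    """Stand-in for the organ segmentation model (e.g. a U-Net variant)."""
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, n_classes, 1),
        )

    def forward(self, x):
        return self.body(x)  # per-voxel class logits

def extract_organ_image(volume: torch.Tensor, seg_model: nn.Module,
                        organ_label: int = 1) -> torch.Tensor:
    """Segment the organ to be detected and mask it out of the volume."""
    with torch.no_grad():
        labels = seg_model(volume).argmax(dim=1, keepdim=True)
    organ_mask = (labels == organ_label).float()  # binary organ mask
    return volume * organ_mask                    # image of the organ

ct = torch.randn(1, 1, 32, 64, 64)  # toy CT/CTA volume (B, C, D, H, W)
organ_img = extract_organ_image(ct, OrganSegNet())
```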
S104: inputting the human medical image and the image of the organ to be detected into a first model to obtain a processing result and an attention area of the first model on the image of the organ to be detected; wherein the attention area is an area in the medical image of the human body from which the first model determines the processing result.
Furthermore, the computing device may input the patient's human medical image and the image of the organ to be detected into the first model to obtain the processing result for the image of the organ to be detected and the attention area in the human medical image. Specifically, when processing the image of the organ to be detected, the first model determines the processing result not only from the features of the organ image but also from the features of the human medical image input alongside it: as noted in steps S100-S102, direct and/or indirect characteristic manifestations appear in the region near the organ to be detected, so when the first model determines the processing result it also attends to a certain region of the human medical image according to those manifestations, and that region is the attention area of this specification. Since the attention area is simply a region of the human medical image, it may be the region where the organ to be detected is located, a region covering the organ and its surroundings, or a region around the organ, among others. In the embodiments of this specification, the first model may comprise a feature extraction layer for extracting the first features of the human medical image and the image of the organ to be detected and feeding them into a result output layer, which derives the processing result and the attention area from the input first features.
In the embodiments of this specification, the first model may be a cancer prediction model. Following the example above, to determine whether the patient has pancreatic cancer, the CTA image and the image of the pancreas may be input into the cancer prediction model, which outputs a prediction result, namely that the patient is a cancer patient or a non-cancer patient, together with an attention area determined from the CTA image, i.e. the region of the CTA image on which the cancer prediction model mainly relied when deriving the prediction result.
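A hedged sketch of the first model's two-layer structure follows. A class-activation-map (CAM) read-out is one plausible way to realise the "attention area" given that the text later names CAM loss; the patent itself does not fix an architecture, so every layer size here is an assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FirstModel(nn.Module):
    def __init__(self, in_ch=2, n_classes=2):
        super().__init__()
        # feature extraction layer: medical image and organ image
        # stacked as input channels
        self.features = nn.Sequential(
            nn.Conv3d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 64, 3, padding=1), nn.ReLU(),
        )
        # result output layer: global pooling + linear classifier
        self.pool = nn.AdaptiveAvgPool3d(1)
        self.fc = nn.Linear(64, n_classes)

    def forward(self, image, organ_image):
        feat = self.features(torch.cat([image, organ_image], dim=1))
        logits = self.fc(self.pool(feat).flatten(1))  # processing result
        # CAM: weight the feature maps by the classifier weights of the
        # predicted class; the highlighted region is the attention area
        w = self.fc.weight[logits.argmax(dim=1)]      # (B, 64)
        cam = F.relu(torch.einsum('bc,bcdhw->bdhw', w, feat))
        attention = cam / (cam.amax() + 1e-6)         # normalised map
        return logits, attention

model = FirstModel()
logits, attention = model(torch.randn(1, 1, 16, 32, 32),
                          torch.randn(1, 1, 16, 32, 32))
```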
S106: and inputting the human medical image and the attention area into a second model based on the processing result, and determining the focus position in the human medical image.
Finally, based on the processing result for the image of the organ to be detected obtained from the first model, the computing device may input the patient's human medical image and the attention area into the second model and thereby determine the lesion position in the human medical image. In this process, the image of the organ to be detected in the human medical image, or the first model's processing result of it, restricts the search for the cancer lesion to the region where the organ to be detected lies or the region around it, while the attention area provides the difference information between the patient and a normal person caused by the organ to be detected; inputting the attention area therefore improves the accuracy and sensitivity with which the second model detects the cancer lesion.
In one or more embodiments of this specification, the second model may be a lesion prediction model. Continuing the example above, suppose the patient's CTA image and the image of the pancreas were input into the cancer prediction model, the prediction result was that the patient is a pancreatic cancer patient, and the attention area determined by the cancer prediction model from the CTA image was obtained. The attention area and the patient's human medical image may then be input into the lesion prediction model, which determines the lesion position in the human medical image.
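The interface of step S106 can be pictured as below; stacking the attention area as an extra input channel and gating on the first model's result are assumptions made for clarity, since the patent leaves the second model's internals open.

```python
import torch
import torch.nn as nn

class SecondModel(nn.Module):
    """Lesion prediction model: voxel-wise lesion/background logits."""
    def __init__(self, in_ch=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 2, 1),
        )

    def forward(self, image, attention):
        # attention: (B, D, H, W) map from the first model
        x = torch.cat([image, attention.unsqueeze(1)], dim=1)
        return self.net(x)

def detect_lesion(image, attention, first_result, second_model):
    # use the processing result as a gate: only search for a lesion
    # when the first model flags the patient as a cancer case
    # (batch size 1 and class 0 = non-cancer are assumptions)
    if first_result.argmax(dim=1).item() == 0:
        return None
    with torch.no_grad():
        logits = second_model(image, attention)
    return logits.argmax(dim=1)  # predicted lesion mask
```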
Based on the method shown in fig. 1, the lesion detection method provided by this specification first determines the processing result for the image of the organ to be detected in the human medical image and, at the same time, the attention area of the human medical image on which the first model based that result; the lesion position in the human medical image is then determined from the processing result and the attention area. Because the first model relies mainly on the attention area when determining the processing result, and the attention area provides the difference information between the patient's organs and a normal person's on the medical image, the method provided in this specification determines the patient's lesion based on the determined attention area. The attention area can provide difference information between the diseased site and/or its surroundings and the corresponding sites of a normal person, i.e. it contains the direct and/or indirect characteristic manifestations of the patient's condition; this differs from detecting the lesion from the diseased site alone and greatly improves the accuracy of the detected lesion.
Furthermore, in order to evaluate the processing result for the image of the patient's organ to be detected obtained from the first model, and thereby gauge the severity of the patient's condition, the computing device may further determine an evaluation result based on the attention area, the omics features of the image of the organ to be detected and the clinical features of the patient's clinical information. The evaluation result is obtained by assessing the processing result with multi-dimensional information comprising at least the attention area, the image of the organ to be detected and the patient's clinical information. Specifically, the computing device may determine the patient's clinical information, derive the omics features of the image of the organ to be detected and the clinical features of the clinical information, and input the first features, the omics features and the clinical features into the evaluation model, so that the evaluation model evaluates the processing result output by the first model against these multi-dimensional features to obtain the evaluation result.
The evaluation model may comprise a feature fusion network and a convolution network. Specifically, when the computing device inputs the first features, the omics features and the clinical features into the evaluation model, the feature fusion network first produces fused features, which are then passed through the convolution network to yield the final evaluation result; that is, the evaluation model obtains the evaluation result by processing the input multi-dimensional information. In this specification, the evaluation model may be a tumor staging model and the evaluation result a TNM staging result.
The omics features include at least intensity features, shape features and texture features. The clinical information may be determined from the clinician's consultation records for the patient.
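As a sketch, the evaluation model's fusion-then-convolution structure could look as follows; concatenation-based fusion, the feature dimensions and the four-way stage output are assumptions, since the patent names only the two sub-networks.

```python
import torch
import torch.nn as nn

class EvaluationModel(nn.Module):
    def __init__(self, first_dim=64, omics_dim=32, clinical_dim=8,
                 n_stages=4):
        super().__init__()
        fused = first_dim + omics_dim + clinical_dim
        # feature fusion network: concatenate, then mix with a linear layer
        self.fusion = nn.Sequential(nn.Linear(fused, 128), nn.ReLU())
        # convolution network over the fused feature vector
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(16, n_stages),           # e.g. TNM stage scores
        )

    def forward(self, first_feat, omics_feat, clinical_feat):
        fused = self.fusion(torch.cat([first_feat, omics_feat,
                                       clinical_feat], dim=1))
        return self.conv(fused.unsqueeze(1))   # evaluation result

model = EvaluationModel()
result = model(torch.randn(2, 64), torch.randn(2, 32), torch.randn(2, 8))
```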
In addition, the specification also provides a training method of the first model and the second model.
To train the first model, the computing device may first acquire a first sample human medical image, determine the image of the first sample organ to be detected in it, and take the processing result annotated on the first sample human medical image as the first label. The computing device may then input the first sample human medical image and the image of the first sample organ to be detected into the first model to be trained, obtaining the processing result and the attention area output by that model. Finally, the computing device may train the first model to be trained with the goal of minimizing the difference between the first label and the processing result it outputs.
When the first model is trained, a loss function such as CAM loss can confine the first model's attention to the organ to be detected and its periphery, so that the first model concentrates on learning the differences between a normal person and a patient at the organ to be detected and the region around it, which improves the stability and accuracy of the processing result the first model determines.
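One way the first-model training step could be written is sketched below. The CAM-loss term, which penalises attention mass falling outside a dilated organ mask, is one plausible reading of "limiting attention to the organ and its periphery"; the patent gives no exact formula, and the loss weight lam is an assumption.

```python
import torch
import torch.nn.functional as F

def cam_loss(attention, organ_mask, dilation=5):
    # dilate the binary organ mask so peri-organ tissue stays allowed
    allowed = F.max_pool3d(organ_mask, dilation, stride=1,
                           padding=dilation // 2)
    return (attention * (1.0 - allowed.squeeze(1))).mean()

def train_first_model_step(model, optimizer, image, organ_image,
                           organ_mask, first_label, lam=0.1):
    optimizer.zero_grad()
    logits, attention = model(image, organ_image)  # result + attention
    loss = F.cross_entropy(logits, first_label)    # label/result difference
    loss = loss + lam * cam_loss(attention, organ_mask)
    loss.backward()
    optimizer.step()
    return loss.item()
```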
To train the second model, the computing device may first acquire a second sample human medical image, determine the image of the second sample organ to be detected in it, determine the true lesion position in the second sample human medical image, and take that true lesion position as the second label. The computing device may then input the second sample human medical image and the image of the second sample organ to be detected into the trained first model to obtain a second processing result and a second attention area. Finally, the computing device may input the second sample human medical image, the image of the second sample organ to be detected and the second attention area into the second model, and train the second model with the optimization goal of minimizing the difference between its output and the second label.
When the second model is trained, the human medical image restricts the detected cancer lesion position to the organ to be detected and its surrounding region, and because the attention area output by the first model provides the difference information between the patient and a normal person at the organ to be detected and its surroundings, the second model can detect the cancer lesion position on the basis of the attention area, which improves the accuracy and sensitivity of the detected lesion position.
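The second-model training step can be pictured as follows; freezing the trained first model and using voxel-wise cross-entropy against the annotated lesion mask (the second label) are assumptions, the patent stating only that the output/label difference is minimised.

```python
import torch
import torch.nn.functional as F

def train_second_model_step(second_model, first_model, optimizer,
                            image, organ_image, lesion_mask):
    with torch.no_grad():                # first model is already trained
        _, attention = first_model(image, organ_image)
    optimizer.zero_grad()
    logits = second_model(image, attention)      # voxel-wise lesion logits
    loss = F.cross_entropy(logits, lesion_mask)  # gap to the second label
    loss.backward()
    optimizer.step()
    return loss.item()
```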
Further, in one or more embodiments of this specification, the models described in the preceding steps may adopt architectures commonly used in the medical imaging field, such as U-Net, 3D U-Net, V-Net, ResNet and DenseNet. Also, the organ segmentation model in steps S100-S102 may be trained with Dice and Boundary loss as its loss functions, i.e. the organ segmentation model is trained based on the loss functions Dice loss and Boundary loss. Using the two losses together not only improves the segmentation accuracy for the organ to be detected in the human medical image, but also directs more attention to the information of the organ to be detected and the region surrounding the cancer lesion.
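For the combined segmentation objective, a common formulation is sketched below: a soft Dice term plus a boundary term that weights the softmax output by a precomputed signed distance map of the ground-truth organ boundary. The patent names the two losses without giving formulas, so this pairing and the weight alpha are assumptions.

```python
import torch
import torch.nn.functional as F

def dice_loss(logits, target, eps=1e-6):
    prob = F.softmax(logits, dim=1)[:, 1]      # foreground probability
    inter = (prob * target).sum()
    return 1 - (2 * inter + eps) / (prob.sum() + target.sum() + eps)

def boundary_loss(logits, signed_dist):
    # signed_dist: precomputed signed distance map of the ground-truth
    # boundary (negative inside the organ, positive outside)
    prob = F.softmax(logits, dim=1)[:, 1]
    return (prob * signed_dist).mean()

def segmentation_loss(logits, target, signed_dist, alpha=0.5):
    return (dice_loss(logits, target.float())
            + alpha * boundary_loss(logits, signed_dist))
```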
Still further, in one or more embodiments of this specification, the first model may be trained with Focal and CAM loss as its loss functions, i.e. the first model is trained based on the loss functions Focal loss and CAM loss. Within the first model, CAM loss can further confine the network's attention to the region around the organ to be detected, further strengthening the accuracy of the processing result the first model determines. Following the example above, suppose the patient is a pancreatic cancer patient and the human medical image is a CTA image, specifically a portal-venous-phase enhanced CTA image; the organ segmentation model then segments out the image of the organ to be detected, i.e. the image of the pancreas. The CTA image and the image of the pancreas may then be input into the first model, which outputs the processing result of whether the patient is a pancreatic cancer patient, together with the attention area determined from the CTA image. In this process, the result output layer of the first model may be an attention-based network, and CAM loss can confine the output layer's attention to the vicinity of the organ to be detected, so that the first model focuses on learning the differences between a normal person and a pancreatic cancer patient at and near the pancreas, improving the stability and accuracy of the determined processing result. In the subsequent step, the cancer lesion position is detected on the basis of the attention area, which strengthens the accuracy of the detected lesion position.
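Focal loss is a standard classification loss for imbalanced data; a minimal implementation (without the optional class-weighting term) is given below for reference, to be combined with the CAM-loss term sketched earlier. The focusing parameter gamma=2.0 is the conventional default, not a value from the patent.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, target, gamma=2.0):
    logp = F.log_softmax(logits, dim=1)
    logp_t = logp.gather(1, target.unsqueeze(1)).squeeze(1)
    p_t = logp_t.exp()                   # probability of the true class
    # down-weight easy examples, focus training on hard ones
    return (-(1 - p_t) ** gamma * logp_t).mean()

logits = torch.randn(4, 2)               # toy classification logits
labels = torch.randint(0, 2, (4,))
print(focal_loss(logits, labels))
```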
Furthermore, in one or more embodiments of the present description, the evaluation model may also employ Focal loss as a loss function to achieve optimization of the evaluation results.
Based on the foregoing disclosure, the embodiment of the present disclosure further provides a focus detection system, which includes: medical imaging equipment, computing equipment;
the medical imaging device is used for acquiring a human body medical image of a patient and transmitting the human body medical image to the computing device.
The computing device is used for receiving the human medical image, determining the image of the organ to be detected in the human medical image, inputting the image of the organ to be detected and the human medical image into the first model to obtain a processing result and an attention area, and, based on the processing result, inputting the human medical image and the attention area into the second model to determine the focus position in the human medical image.
Based on the above-mentioned method for detecting a lesion, the embodiment of the present disclosure further provides a schematic diagram of a detection device for a lesion, as shown in fig. 2.
Fig. 2 is a schematic diagram of a detection apparatus for a lesion according to an embodiment of the present disclosure, where the apparatus includes:
An acquisition module 200 for acquiring a human medical image of a patient;
an organ determination module 202 for determining an image of an organ to be detected in the medical image of the human body;
the input module 204 is configured to input the human medical image and the image of the organ to be detected into a first model, and obtain a processing result and an attention area of the first model on the image of the organ to be detected; wherein the attention area is an area in the medical image of the human body according to which the first model determines the processing result;
a focus determining module 206, configured to input the human medical image and the attention area into a second model based on the processing result, and determine a focus position in the human medical image.
Optionally, the organ determining module 202 is specifically configured to input the human medical image into an organ segmentation model to obtain an image of the organ to be detected in the human medical image.
Optionally, the obtaining module 200 is further configured to obtain clinical information of the patient;
the input module 204 is specifically configured to input the clinical information, the medical image of the human body, and the image of the organ to be detected into a first model.
Optionally, the first model comprises a feature extraction layer and a result output layer; the feature extraction layer is used for extracting first features of the human medical image and the image of the organ to be detected, and inputting the first features into the result output layer; the result output layer is used for obtaining the processing result and the attention area according to the input first features;
the apparatus further comprises: an evaluation module 208;
the evaluation module 208 is specifically configured to determine omics features of the image of the organ to be detected and clinical features of the patient's clinical information, and to input the omics features, the clinical features and the first features into an evaluation model to obtain an evaluation result.
Optionally, the evaluation model comprises: a feature fusion network, a convolution network;
the evaluation module 208 is specifically configured to input the first feature, the omics feature, and the clinical feature into the feature fusion network to obtain a fusion feature; the fusion features are input into the convolutional network.
Optionally, the apparatus further comprises: a model training module 210;
the model training module 210 is specifically configured to obtain a first sample human medical image; determine an image of a first sample organ to be detected in the first sample human medical image; determine a processing result of labeling the first sample human medical image as a first label; input an image of a first sample organ to be detected in the first sample human medical image into a first model to be trained, and obtain a processing result output by the first model to be trained and an attention area output by the first model to be trained; and train the first model to be trained with the goal of minimizing the difference between the first label and the processing result output by the first model to be trained.
Optionally, the model training module 210 is specifically configured to obtain a second sample human medical image; determining an image of a second sample organ to be detected in the second sample human medical image; determining the true focus position in the second sample human medical image, and taking the true focus position as a second label; inputting the second sample human medical image and the image of the organ to be detected of the second sample into a trained first model to obtain a second processing result and a second attention area; and inputting the second sample human medical image, the image of the organ to be detected of the second sample and the second attention area into the second model, and training the second model by taking the minimum difference between the output result of the second model and the second label as an optimization target.
The embodiments of the present specification also provide a computer-readable storage medium storing a computer program, where the computer program is configured to execute the method for detecting a lesion described above.
Based on the above lesion detection method, the embodiment of this specification further provides the schematic block diagram of the electronic device shown in fig. 3. As shown in fig. 3, at the hardware level the electronic device includes a processor, an internal bus, a network interface, a memory and a non-volatile storage, and may of course also include the hardware required for other services. The processor reads the corresponding computer program from the non-volatile storage into the memory and then runs it to implement the lesion detection method described above.
Of course, other implementations, such as logic devices or combinations of hardware and software, are not excluded from the present description, that is, the execution subject of the following processing flows is not limited to each logic unit, but may be hardware or logic devices.
In the 1990s, an improvement to a technology could be clearly distinguished as an improvement in hardware (e.g., an improvement to a circuit structure such as a diode, transistor or switch) or an improvement in software (an improvement to the method flow). However, with the development of technology, many of today's improvements to method flows can be regarded as direct improvements to hardware circuit structures. Designers almost always obtain the corresponding hardware circuit structure by programming the improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement of a method flow cannot be realized by a hardware entity module. For example, a programmable logic device (PLD) (e.g., a field programmable gate array (FPGA)) is an integrated circuit whose logic function is determined by the user's programming of the device. A designer programs to "integrate" a digital system onto a PLD without asking the chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, instead of manually fabricating integrated circuit chips, such programming is nowadays mostly implemented with "logic compiler" software, which is similar to the software compilers used in program development; the original code before compilation must also be written in a specific programming language, called a hardware description language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM and RHDL (Ruby Hardware Description Language), of which VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used. It will also be apparent to those skilled in the art that a hardware circuit implementing the logic method flow can readily be obtained merely by slightly logic-programming the method flow into an integrated circuit using one of the above hardware description languages.
The controller may be implemented in any suitable manner; for example, the controller may take the form of a microprocessor or processor together with a computer readable medium storing computer readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an application-specific integrated circuit (ASIC), a programmable logic controller or an embedded microcontroller. Examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20 and Silicon Labs C8051F320; a memory controller may also be implemented as part of the control logic of a memory. Those skilled in the art will also appreciate that, besides implementing the controller purely as computer readable program code, it is entirely possible to logically program the method steps so that the controller achieves the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Such a controller may therefore be regarded as a hardware component, and the means included within it for performing the various functions may also be regarded as structures within the hardware component, or even as either software modules implementing the method or structures within the hardware component.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. One typical implementation is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being functionally divided into various units, respectively. Of course, the functions of each element may be implemented in one or more software and/or hardware elements when implemented in the present specification.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random access memory (RAM) and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer readable media, including permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article or apparatus that comprises the element.
It will be appreciated by those skilled in the art that embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, the present specification may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present description can take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The description may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for system embodiments, since they are substantially similar to method embodiments, the description is relatively simple, as relevant to see a section of the description of method embodiments.
The foregoing is merely exemplary of the present disclosure and is not intended to limit the disclosure. Various modifications and alterations to this specification will become apparent to those skilled in the art. Any modifications, equivalent substitutions, improvements, or the like, which are within the spirit and principles of the present description, are intended to be included within the scope of the claims of the present application.

Claims (10)

1. A method of detecting a lesion, the method comprising:
acquiring a human medical image of a patient;
determining an image of an organ to be detected in the human medical image;
inputting the human medical image and the image of the organ to be detected into a first model to obtain a processing result and an attention area of the first model for the image of the organ to be detected, wherein the attention area is the area in the human medical image on which the first model relies to determine the processing result;
and inputting the human medical image and the attention area into a second model based on the processing result, to determine a lesion position in the human medical image.
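The two-stage flow of claim 1 can be pictured with a short sketch. The PyTorch code below is a minimal illustration under assumed details: the `FirstModel` and `SecondModel` classes, the channel sizes, and the choice of a channel-averaged activation map as the "attention area" are all illustrative assumptions, not architectures fixed by this application.

```python
import torch
import torch.nn as nn

class FirstModel(nn.Module):
    """Classifies the organ region and exposes its last feature map."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),   # image + organ image channels
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(32, 2)                     # e.g. lesion present / absent

    def forward(self, image, organ_image):
        feat = self.backbone(torch.cat([image, organ_image], dim=1))  # (B, 32, H, W)
        logits = self.head(feat.mean(dim=(2, 3)))        # global average pooling
        # One plausible "attention area": the channel-averaged activation map.
        attention = feat.mean(dim=1, keepdim=True)       # (B, 1, H, W)
        return logits, attention

class SecondModel(nn.Module):
    """Predicts a per-pixel lesion map from the image plus the attention area."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),                         # lesion probability map
        )

    def forward(self, image, attention):
        return self.net(torch.cat([image, attention], dim=1))

image = torch.randn(1, 1, 64, 64)        # the human medical image
organ = torch.randn(1, 1, 64, 64)        # the image of the organ to be detected
first, second = FirstModel(), SecondModel()
logits, attn = first(image, organ)
if logits.argmax(dim=1).item() == 1:     # run stage two only on a positive result
    lesion_map = torch.sigmoid(second(image, attn))
```

Running the second model only when the first model's processing result is positive is one plausible reading of "based on the processing result"; the claim itself does not fix the gating rule.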
2. The method according to claim 1, wherein determining an image of an organ to be detected in the human medical image specifically comprises:
inputting the human medical image into an organ segmentation model to obtain the image of the organ to be detected in the human medical image.
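As a hedged sketch of this segmentation step: the tiny stand-in network below plays the role of the organ segmentation model (in practice a trained network such as a U-Net variant would be used), and masking the image by the predicted organ mask is one assumed way to obtain the "image of the organ to be detected".

```python
import torch
import torch.nn as nn

seg_model = nn.Sequential(                 # placeholder for a trained segmentation net
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 1, 1),
)
image = torch.randn(1, 1, 64, 64)          # the human medical image
organ_mask = (torch.sigmoid(seg_model(image)) > 0.5).float()
organ_image = image * organ_mask           # the image of the organ to be detected
```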
3. The method of claim 1, wherein before inputting the human medical image and the image of the organ to be detected into the first model, the method further comprises:
acquiring clinical information of the patient;
and inputting the human medical image and the image of the organ to be detected into the first model specifically comprises:
inputting the clinical information, the human medical image and the image of the organ to be detected into the first model.
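One assumed way to realize claim 3 is to encode the tabular clinical information as a vector and feed it to the first model alongside the image inputs; the field layout and encoder below are hypothetical.

```python
import torch
import torch.nn as nn

clinical = torch.tensor([[63.0, 1.0, 5.2]])   # hypothetical: age, sex flag, a lab value
clinical_encoder = nn.Linear(3, 32)           # map tabular data to a feature vector
clin_feat = clinical_encoder(clinical)        # (1, 32)
# A first-model variant could add this vector to its pooled image features
# before the classification head, e.g. logits = head(img_feat + clin_feat).
```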
4. The method of claim 1, wherein the first model comprises a feature extraction layer and a result output layer; the feature extraction layer is used for extracting first features from the human medical image and the image of the organ to be detected, and inputting the first features into the result output layer; and the result output layer is used for obtaining the processing result and the attention area according to the input first features;
the method further comprises:
determining radiomics features of the image of the organ to be detected and clinical features of the clinical information of the patient;
and inputting the radiomics features, the clinical features and the first features into an evaluation model to obtain an evaluation result.
5. The method of claim 4, wherein the evaluation model comprises a feature fusion network and a convolutional network;
and inputting the first features, the radiomics features and the clinical features into the evaluation model specifically comprises:
inputting the first features, the radiomics features and the clinical features into the feature fusion network to obtain fused features;
and inputting the fused features into the convolutional network.
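Putting claims 4 and 5 together, an evaluation model might fuse the three feature vectors with a linear layer and then apply a small convolutional network. Every layer size and the two-class output below are assumptions, not details from this application.

```python
import torch
import torch.nn as nn

class EvaluationModel(nn.Module):
    def __init__(self, img_dim=32, radiomics_dim=4, clinical_dim=3):
        super().__init__()
        self.fusion = nn.Linear(img_dim + radiomics_dim + clinical_dim, 64)
        self.conv = nn.Sequential(              # 1-D conv over the fused vector
            nn.Conv1d(1, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(8, 2),                    # e.g. a two-class assessment
        )

    def forward(self, first_feat, radiomics_feat, clinical_feat):
        fused = torch.relu(self.fusion(
            torch.cat([first_feat, radiomics_feat, clinical_feat], dim=1)))
        return self.conv(fused.unsqueeze(1))    # (B, 1, 64) -> (B, 2)

model = EvaluationModel()
out = model(torch.randn(2, 32), torch.randn(2, 4), torch.randn(2, 3))
```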
6. The method of claim 1, wherein the first model is trained using the following method:
acquiring a first sample human medical image;
determining an image of a first sample organ to be detected in the first sample human medical image;
determining a processing result labeled for the first sample human medical image as a first label;
inputting the first sample human medical image and the image of the first sample organ to be detected into a first model to be trained, to obtain a processing result output by the first model to be trained and an attention area output by the first model to be trained;
and training the first model to be trained with the objective of minimizing the difference between the first label and the processing result output by the first model to be trained.
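A hedged sketch of this training procedure, reusing the `FirstModel` class from the claim-1 sketch: cross-entropy is one assumed way to measure the difference between the first label and the model's processing result, and `train_pairs` stands in for a real labeled dataset.

```python
import torch
import torch.nn.functional as F

model = FirstModel()                              # from the claim-1 sketch above
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# Hypothetical dataset of (sample image, sample organ image, first label).
train_pairs = [(torch.randn(4, 1, 64, 64), torch.randn(4, 1, 64, 64),
                torch.randint(0, 2, (4,)))]
for image, organ_image, label in train_pairs:
    logits, _attention = model(image, organ_image)
    loss = F.cross_entropy(logits, label)         # gap to the first label
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```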
7. The method of claim 6, wherein the second model is trained using the following method:
acquiring a second sample human medical image;
determining an image of a second sample organ to be detected in the second sample human medical image; determining a true lesion position in the second sample human medical image, and taking the true lesion position as a second label;
inputting the second sample human medical image and the image of the second sample organ to be detected into the trained first model to obtain a second processing result and a second attention area;
and inputting the second sample human medical image, the image of the second sample organ to be detected and the second attention area into the second model, and training the second model with minimizing the difference between the output of the second model and the second label as the optimization objective.
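Correspondingly, a sketch of the claim-7 procedure: the trained first model is frozen and only supplies the attention area, while the second model is optimized against the true lesion position (represented here as a per-pixel mask, with binary cross-entropy as an assumed loss). For brevity the sketch reuses the claim-1 `SecondModel` interface and omits the organ-image channel that claim 7 also feeds in.

```python
import torch
import torch.nn.functional as F

first_model, second_model = FirstModel(), SecondModel()    # claim-1 sketches
first_model.eval()                                 # assumed already trained, kept fixed
optimizer = torch.optim.Adam(second_model.parameters(), lr=1e-4)
# Hypothetical dataset of (sample image, sample organ image, true lesion mask).
samples = [(torch.randn(4, 1, 64, 64), torch.randn(4, 1, 64, 64),
            torch.randint(0, 2, (4, 1, 64, 64)).float())]
for image, organ_image, lesion_mask in samples:
    with torch.no_grad():
        _result, attention = first_model(image, organ_image)
    pred = second_model(image, attention)
    loss = F.binary_cross_entropy_with_logits(pred, lesion_mask)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```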
8. A system for detecting a lesion, the system comprising a medical imaging device and a computing device;
the medical imaging device is used for acquiring a human medical image of a patient and transmitting the human medical image to the computing device;
and the computing device is used for receiving the human medical image, determining an image of an organ to be detected in the human medical image, inputting the image of the organ to be detected and the human medical image into a first model to obtain a processing result and an attention area, inputting the human medical image and the attention area into a second model based on the processing result, and determining a lesion position in the human medical image.
9. A computer readable storage medium, characterized in that the storage medium stores a computer program which, when executed by a processor, implements the method of any of the preceding claims 1-7.
10. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method of any of the preceding claims 1-7 when the program is executed.
CN202310922203.5A 2023-07-25 2023-07-25 Focus detection method, focus detection device, storage medium and electronic equipment Pending CN116912224A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310922203.5A CN116912224A (en) 2023-07-25 2023-07-25 Focus detection method, focus detection device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310922203.5A CN116912224A (en) 2023-07-25 2023-07-25 Focus detection method, focus detection device, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN116912224A true CN116912224A (en) 2023-10-20

Family

ID=88364521

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310922203.5A Pending CN116912224A (en) 2023-07-25 2023-07-25 Focus detection method, focus detection device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN116912224A (en)

Similar Documents

Publication Publication Date Title
CN109448004B (en) Centerline-based intracranial blood vessel image interception method and system
CN111127428A (en) Method and system for extracting target region based on brain image data
CN111081378B (en) Aneurysm rupture risk assessment method and system
CN117333529B (en) Template matching-based vascular ultrasonic intima automatic measurement method and system
CN116843994A (en) Model training method and device, storage medium and electronic equipment
CN116030247B (en) Medical image sample generation method and device, storage medium and electronic equipment
CN111223089B (en) Aneurysm detection method and device and computer readable storage medium
CN116524295A (en) Image processing method, device, equipment and readable storage medium
CN116912224A (en) Focus detection method, focus detection device, storage medium and electronic equipment
CN116258679A (en) Information recommendation method and device, storage medium and electronic equipment
CN115546094A (en) Model training method, and CT image optimization method and device
CN114998582A (en) Coronary artery blood vessel segmentation method, device and storage medium
CN116229218B (en) Model training and image registration method and device
CN116152246B (en) Image recognition method, device, equipment and storage medium
CN116188469A (en) Focus detection method, focus detection device, readable storage medium and electronic equipment
CN113379770A (en) Nasopharyngeal carcinoma MR image segmentation network construction method, image segmentation method and device
CN117036830B (en) Tumor classification model training method and device, storage medium and electronic equipment
CN117357132B (en) Task execution method and device based on multi-layer brain network node participation coefficient
CN116864064A (en) Medical image report generation method, device and storage medium
CN116312981A (en) Image processing method, device and readable storage medium
CN116152600A (en) Model training method, device, equipment and readable storage medium
CN116344058B (en) Alzheimer's risk labeling method and device based on graph signals
CN117457186A (en) Prediction method and prognosis method
CN116580199A (en) DeepLabV3+ based image segmentation method, device and storage medium
CN117252831A (en) Focus transfer prediction method, device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination