CN113781597A - Lung CT image focus identification method, equipment and medium - Google Patents

Lung CT image focus identification method, equipment and medium

Info

Publication number
CN113781597A
CN113781597A (application CN202111133615.8A)
Authority
CN
China
Prior art keywords
lung
image
focus
lesion
detected
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111133615.8A
Other languages
Chinese (zh)
Other versions
CN113781597B (en)
Inventor
高岩
蔡明佳
尹青山
高明
王建华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong New Generation Information Industry Technology Research Institute Co Ltd
Original Assignee
Shandong New Generation Information Industry Technology Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong New Generation Information Industry Technology Research Institute Co Ltd filed Critical Shandong New Generation Information Industry Technology Research Institute Co Ltd
Priority to CN202111133615.8A priority Critical patent/CN113781597B/en
Publication of CN113781597A publication Critical patent/CN113781597A/en
Application granted granted Critical
Publication of CN113781597B publication Critical patent/CN113781597B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/003 Reconstruction from projections, e.g. tomography
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H10/00 ICT specially adapted for the handling or processing of patient-related medical or healthcare data
    • G16H10/60 ICT specially adapted for the handling or processing of patient-related medical or healthcare data for patient-specific data, e.g. for electronic patient records
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H15/00 ICT specially adapted for medical reports, e.g. generation or transmission thereof
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 ICT specially adapted for the handling or processing of medical images
    • G16H30/40 ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Public Health (AREA)
  • Primary Health Care (AREA)
  • Medical Informatics (AREA)
  • Epidemiology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Molecular Biology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The embodiments of this specification disclose a lesion identification method, device and medium for lung CT images, aiming to solve the problem that doctors spend a large amount of time locating lung lesions. The method comprises the following steps: acquiring a lung CT image read by a CT device; taking the lung CT image read by the CT device as the lung CT image to be detected; inputting the lung CT image to be detected into a pre-trained lesion identification model to obtain an output result; if a lesion is present in the lung CT image to be detected, the output result is the lesion position data; and marking the region where the lesion is located on the lung CT image to be detected according to the lesion position data.

Description

Lung CT image focus identification method, equipment and medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a method, an apparatus, and a medium for identifying a lesion in a lung CT image.
Background
Whether in modern times with advanced equipment or in earlier times with limited means, disease has constantly threatened human health and life, and among diseases, lung disease causes the greatest harm: the incidence and mortality of lung cancer rank first among all malignant tumors. With the development of medical imaging technology, imaging plays an important role in disease diagnosis; through medical images, doctors can not only identify many diseases quickly but also choose precise treatment methods. Imaging also allows the treatment effect to be observed intuitively and conveniently, so that the treatment plan can be adjusted in time and accurate, personalized treatment can be provided. For the examination of lung lesions, computed tomography (CT) is generally used to obtain medical images of the lung; a CT image is obtained by performing tomographic scanning, at a certain slice thickness, of the examined part of the human body with an X-ray beam.
In the prior art, lung lesions have complex structures, and some lesions are small in volume and difficult to identify, so identifying lesions in a lung CT image may be difficult for a doctor and costly in both time and effort.
Therefore, a method for assisting in identifying the location of a lesion in a lung CT image is needed.
Disclosure of Invention
One or more embodiments of the present disclosure provide a lesion identification method for lung CT images, which is used to solve the following technical problem: how to assist in identifying the position of a lesion in a lung CT image.
One or more embodiments of the present disclosure adopt the following technical solutions:
one or more embodiments of the present disclosure provide a lesion identification method for a lung CT image, including:
acquiring a lung CT image read by CT equipment;
taking the lung CT image read by the CT equipment as a lung CT image to be detected;
inputting the lung CT image to be detected into a pre-trained focus recognition model to obtain an output result;
if the lung CT image to be detected has a focus, the output result is focus position data of the lung CT image to be detected;
and marking the region of the focus on the lung CT image to be detected according to the focus position data.
Optionally, in one or more embodiments of the present specification, before the inputting the lung CT image to be detected into a pre-trained lesion recognition model, the method further includes:
collecting a lung CT image with a disease and a normal lung CT image as training images;
filtering the training image based on a preset processing mode to obtain a training sample of the focus identification model;
inputting training samples corresponding to the lung CT image with the disease and the normal lung CT image into a convolutional neural network in the same proportion; taking the focus position data of the training sample as output and input into a convolutional neural network model, and training the convolutional neural network;
and if the recognition result output by the trained convolutional neural network meets the preset accuracy, taking the trained convolutional neural network as a focus recognition model.
Optionally, in one or more embodiments of the present specification, after proportionally inputting the training samples corresponding to the diseased lung CT image and the normal lung CT image into the convolutional neural network, the method further includes:
performing sliding convolution on the training sample based on a convolution kernel of the convolutional neural network to extract convolution characteristics of the training sample;
clustering the convolution features through a pooling layer of the convolution neural network and filtering redundant features to obtain dimension reduction convolution features of the training samples;
and inputting the dimension reduction convolution characteristics into a preset classifier, and identifying the focus of the training sample to obtain a focus identification result.
Optionally, in one or more embodiments of the present specification, the filtering the training image based on a preset processing manner to obtain a training sample of the lesion recognition model specifically includes:
calculating the area of each connected region of the training image, and extracting the lung region of the training image according to a preset threshold value of the area of the connected region to obtain a first lung image;
processing the first lung image according to a preset CT value range so as to filter edge non-lung tissues of the first lung image to obtain a second lung image;
adding CT noise to a second lung image to obtain a training sample of the lung CT image; wherein the training samples are stored in gray scale with a preset resolution.
Optionally, in one or more embodiments of the present specification, the inputting the lesion position data of the training sample as an output into a convolutional neural network model, and the training of the convolutional neural network specifically includes:
determining the position coordinates of at least one pair of diagonals of the lesion; wherein the position coordinates of the diagonal include: the upper left corner position coordinate and the lower right corner position coordinate, and the upper right corner position coordinate and the lower left corner position coordinate;
and storing the position coordinates of the diagonal line in an integer array format as the focus position data of the training sample.
Optionally, in one or more embodiments of the present specification, the labeling the lesion region on the lung CT image to be detected according to the lesion position data specifically includes:
carrying out transverse extension and longitudinal extension on points corresponding to the position coordinates of the diagonal lines in a ray form to obtain intersection points of extension lines; wherein the ray is parallel to a side length of the lung CT image;
and connecting the points corresponding to the position coordinates of the intersection points and the diagonal lines by using a marking line to obtain a lesion region marked by the lung image.
Optionally, in one or more embodiments of the present specification, after the labeling the region where the lesion is located on the lung CT image to be detected according to the lesion position data, the method further includes:
acquiring an image sequence of the lesion region of the lung CT image according to the labeling region of the lung CT image;
determining the volume of the focus region in any layer of lung CT image according to focus pixel points, pixel intervals and layer intervals of the focus in any layer of lung CT image of the image sequence;
superposing to obtain the total lesion volume in the lung CT image according to the volume of each layer of lesion area in the image sequence;
and acquiring the total volume of the lung lobes in the lung CT image, and acquiring the volume ratio of the focus in the lung according to the total volume of the focus and the volume of the lung lobes in the lung CT image.
Optionally, in one or more embodiments of the present specification, after obtaining the volume fraction of the lesion in the lung, the method further comprises:
acquiring a historical case of a user corresponding to the lung CT image;
matching a preset report template according to the position information of the focus;
inputting the position information of the focus, the total volume of the focus, the volume proportion of the focus in the lung and the related data of the historical case into the report template;
and processing according to the report template to obtain a lesion identification report corresponding to the lung CT image.
One or more embodiments of the present specification provide a lesion identification apparatus for a lung CT image, including:
at least one processor; and the number of the first and second groups,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
acquiring a lung CT image read by CT equipment;
taking the lung CT image read by the CT equipment as a lung CT image to be detected;
inputting the lung CT image to be detected into a pre-trained focus recognition model to obtain an output result;
if the lung CT image to be detected has a focus, the output result is focus position data of the lung CT image to be detected;
and marking the region of the focus on the lung CT image to be detected according to the focus position data.
One or more embodiments of the present specification provide a non-transitory computer storage medium storing computer-executable instructions configured to:
acquiring a lung CT image read by CT equipment;
taking the lung CT image read by the CT equipment as a lung CT image to be detected;
inputting the lung CT image to be detected into a pre-trained focus recognition model to obtain an output result;
if the lung CT image to be detected has a focus, the output result is focus position data of the lung CT image to be detected;
and marking the region of the focus on the lung CT image to be detected according to the focus position data.
The embodiment of the specification adopts at least one technical scheme which can achieve the following beneficial effects:
the lung CT image is subjected to feature extraction and dimension reduction through the convolutional neural network model, and the over-fitting phenomenon caused by feature redundancy is avoided. During the training process of the convolutional neural network, the lung CT image with the disease and the normal CT image are input in the same proportion as the training image for detection, so that the detection accuracy of the training model detection is improved, and the error of the model identification result is reduced. The focus area is marked on the lung CT image through focus position data, so that a doctor can be assisted to obtain the position of the focus, and the pressure of the doctor is relieved.
Drawings
In order to more clearly illustrate the embodiments of the present specification or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some of the embodiments described in the present specification, and that those skilled in the art can obtain other drawings from them without any creative effort. In the drawings:
fig. 1 is a schematic flowchart of a lesion identification method for lung CT images according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram illustrating an internal structure of a lesion identification apparatus for a lung CT image according to an embodiment of the present disclosure;
fig. 3 is a schematic diagram of an internal structure of a nonvolatile storage medium according to an embodiment of the present disclosure.
Detailed Description
The embodiment of the specification provides a lesion identification method, a device and a medium for lung CT images.
There are many medical image examination methods, but given the characteristics of the lung, CT examination can better reveal a patient's lesion information. Compared with ordinary X-ray examination, the cross-sectional images obtained by CT are clearer and more accurate, with higher definition and more detail.
In recent years, the incidence and mortality of lung cancer have risen rapidly and the number of patients has grown, bringing a heavier workload to doctors. Meanwhile, details of the lesion region are difficult to show clearly in digital chest radiographs, and accurate localization of lung lesion regions is strongly affected by interference from blood vessels, muscles, vertebrae and other structures. Under these conditions, when judging the lesion position in the lung, a doctor may spend a long time identifying a small lesion, which increases the doctor's burden.
To solve the above problems, this specification proposes a lesion identification method for lung CT images. The lesion identification model is trained on training samples built from normal lung CT images and diseased lung CT images; the two types of samples are drawn in equal proportion so that the identification accuracy can be verified, avoiding the problem that accuracy cannot be checked with a single type of training sample. After the model has detected the position of the lesion, the region where the lesion is located is marked based on the lesion position data, assisting the doctor in finding the lesion in the lung CT image, saving the time spent searching for the lesion and relieving the doctor's workload.
In order to make those skilled in the art better understand the technical solutions in the present specification, the technical solutions in the embodiments of the present specification will be clearly and completely described below with reference to the drawings in the embodiments of the present specification, and it is obvious that the described embodiments are only a part of the embodiments of the present specification, and not all of the embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present specification without any creative effort shall fall within the protection scope of the present specification.
One or more embodiments of the present disclosure provide a flow chart of a lesion identification method for lung CT images, as shown in fig. 1.
The process in fig. 1 comprises the following steps:
s101: and acquiring a lung CT image read by CT equipment.
A computed tomography (CT) device performs tomographic scanning of a body slice of a certain thickness with an X-ray beam, converts the analog signals received by the detectors into digital signals, computes the attenuation coefficient of each pixel with a computer, and reconstructs an image, thereby obtaining the cross-sectional structure of each part of the human body. A lung CT image is the image obtained by computed tomography of the chest.
S102: and taking the lung CT image read by the CT equipment as a lung CT image to be detected.
The CT device in step S101 scans the patient's chest, and the acquired lung CT image is taken as the lung CT image to be detected, awaiting identification of the lung lesion position. This avoids the problem of a doctor spending a long time identifying lesion positions that are difficult to distinguish.
S103: and inputting the lung CT image to be detected into a pre-trained focus recognition model to obtain an output result.
In one or more embodiments of the present specification, before the inputting the lung CT image to be detected into a pre-trained lesion recognition model, the method further includes:
collecting a lung CT image with a disease and a normal lung CT image as training images;
filtering the training image based on a preset processing mode to obtain a training sample of the focus identification model;
inputting training samples corresponding to the lung CT image with the disease and the normal lung CT image into a convolutional neural network in the same proportion; taking the focus position data of the training sample as output and input into a convolutional neural network model, and training the convolutional neural network;
and if the recognition result output by the trained convolutional neural network meets the preset accuracy, taking the trained convolutional neural network as a focus recognition model.
In one or more embodiments of the present specification, after proportionally inputting the training samples corresponding to the diseased pulmonary CT image and the normal pulmonary CT image into the convolutional neural network, the method further includes:
performing sliding convolution on the training sample based on a convolution kernel of the convolutional neural network to extract convolution characteristics of the training sample;
clustering the convolution features through a pooling layer of the convolution neural network and filtering redundant features to obtain dimension reduction convolution features of the training samples;
and inputting the dimension reduction convolution characteristics into a preset classifier, and identifying the focus of the training sample to obtain a focus identification result.
In one or more embodiments of the present specification, the filtering the training image based on a preset processing manner to obtain a training sample of the lesion recognition model specifically includes:
calculating the area of each connected region of the training image, and extracting the lung region of the training image according to a preset threshold value of the area of the connected region to obtain a first lung image;
processing the first lung image according to a preset CT value range so as to filter edge non-lung tissues of the first lung image to obtain a second lung image;
adding CT noise to a second lung image to obtain a training sample of the lung CT image; wherein the training samples are stored in gray scale with a preset resolution.
In one or more embodiments of the present specification, the inputting the lesion position data of the training sample as an output into a convolutional neural network model, and the training of the convolutional neural network specifically includes:
determining the position coordinates of at least one pair of diagonals of the lesion; wherein the position coordinates of the diagonal include: the upper left corner position coordinate and the lower right corner position coordinate, and the upper right corner position coordinate and the lower left corner position coordinate;
and storing the position coordinates of the diagonal line in an integer array format as the focus position data of the training sample.
Diseased lung CT images and normal lung CT images are collected as training images, for example from an Internet-of-Things database or a hospital database. Model training is performed on both diseased and normal lung CT images, so that the trained lesion identification model can recognize both lung CT images to be detected that contain a lesion and those that do not. To ensure the accuracy of the lesion identification model, no fewer than 50 lung CT images of each typical lung disease and no fewer than 500 diseased lung CT images in total are required.
Before the training images are input into the convolutional neural network model, it should be noted that the lung CT images obtained from the CT device contain interfering structures such as muscle and vertebrae. These interfering structures therefore need to be filtered out by a preset processing method to obtain the training samples of the lesion identification model. Specifically, the method comprises the following steps:
The connected-region area of the lung differs from that of bone and muscle. After the area of each connected region in the training image is calculated, tissues outside the preset range of lung connected-region area are filtered out, which achieves a coarse extraction of the lung region. For example, if the area threshold of a connected region is set to be greater than 30 mm², the remaining regions with area larger than 30 mm² are kept as the first lung image. In the first lung image obtained after this coarse connected-region filtering, different tissues have different X-ray absorption coefficients, which are represented by CT values. Therefore, after the edge non-lung tissue of the first lung image is filtered out based on the preset CT value range of lung tissue, the filtered second lung image is obtained.
In addition, lung CT images acquired by real equipment generally contain noise. To make the training samples closer to the lung CT images to be detected that are acquired by such equipment, common CT noise, such as channel-plate noise and random noise, is added to the second lung image as interference, and the second lung image with CT noise added is used as the training sample of the lung CT image. Note that the input-layer resolution of the convolutional neural network model can be set to a grayscale image of 128 × 128 size, so all images of the training samples are saved in grayscale format. A rough sketch of this preprocessing is given below.
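The sketch below, in Python, assumes illustrative values for the area threshold, the lung CT value window and the noise level, as well as the helper name preprocess_slice; it is not the exact procedure fixed by this disclosure.

    import numpy as np
    from skimage import measure, transform

    def preprocess_slice(ct_slice_hu, pixel_area_mm2=0.5, area_thresh_mm2=30.0,
                         hu_low=-1000.0, hu_high=-400.0, noise_sigma=10.0):
        """Rough sketch of the described preprocessing for one CT slice given in
        Hounsfield units; thresholds and noise level are illustrative assumptions."""
        # Coarse lung extraction: label connected regions of a candidate-lung mask
        # and keep only regions whose area exceeds the preset threshold.
        candidate = (ct_slice_hu > hu_low) & (ct_slice_hu < hu_high)
        labels = measure.label(candidate)
        keep = np.zeros_like(candidate)
        for region in measure.regionprops(labels):
            if region.area * pixel_area_mm2 > area_thresh_mm2:
                keep[labels == region.label] = True
        first_lung = np.where(keep, ct_slice_hu, hu_low)        # "first lung image"

        # Filter edge non-lung tissue using the preset CT value (HU) range.
        second_lung = np.clip(first_lung, hu_low, hu_high)      # "second lung image"

        # Add CT-like random noise so samples resemble real acquisitions.
        noisy = second_lung + np.random.normal(0.0, noise_sigma, second_lung.shape)

        # Store as a grayscale image at the preset 128 x 128 input resolution.
        gray = (noisy - noisy.min()) / (np.ptp(noisy) + 1e-6)
        return transform.resize(gray, (128, 128), anti_aliasing=True).astype(np.float32)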
For the diseased lung CT images in the training samples, the lesion position is determined from the doctor's previous diagnosis, and at least one pair of diagonal corner coordinates of the lesion is stored in an integer array format as the lesion position data of the training sample, to be used for marking the lesion position on the lung CT image to be identified. The diagonal position coordinates may be either the pair formed by the upper-left and lower-right corner coordinates of the lesion, or the pair formed by the upper-right and lower-left corner coordinates of the lesion.
The preprocessed training samples corresponding to diseased lung CT images and normal lung CT images are fed into the convolutional neural network as input in a 1:1 ratio, and the lesion position data corresponding to the training samples are fed in as the output (supervision labels) to train the convolutional neural network model. In this way the convolutional neural network model can accurately distinguish diseased lung CT images from normal lung CT images, which ensures the identification accuracy of the convolutional neural network during training.
A convolutional neural network is a feedforward neural network. Its convolution kernels, using shared weights, slide over the input image to perform convolution and extract the convolution features of the training sample. For example, a convolution kernel of size M × N is slid over the image matrix of the training sample to obtain the convolution features; the kernel size can be set differently according to the required efficiency and precision of network training. The pooling layer of the convolutional neural network aggregates the features extracted by the convolutional layers and filters out redundant features, which reduces the amount of data in the computation and effectively avoids overfitting. After feature extraction and dimension reduction have been performed on the lung CT image by the convolutional neural network model, the extracted features are used to train an SVM (support vector machine) classifier to identify the lesion in the training sample, and the identification result is output, achieving the purpose of detecting the lesion position. If the identification result output by the trained network meets the preset identification accuracy, the trained convolutional neural network that satisfies the requirement is used as the lesion identification model to automatically identify lesions in lung CT images. A minimal sketch of this feature-extraction-plus-classifier arrangement is given below.
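The following sketch illustrates such a convolution-pooling-SVM arrangement, assuming PyTorch and scikit-learn; the layer sizes, the dummy data and the class name ConvFeatureExtractor are illustrative assumptions, not the network defined by this disclosure.

    import torch
    import torch.nn as nn
    from sklearn.svm import SVC

    class ConvFeatureExtractor(nn.Module):
        """Sliding convolution extracts features; pooling aggregates them and
        discards redundancy, yielding reduced-dimension convolution features."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),                     # 128 -> 64
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),                     # 64 -> 32
                nn.AdaptiveAvgPool2d(4),             # 32 -> 4: strong dimension reduction
            )

        def forward(self, x):                        # x: (N, 1, 128, 128) grayscale slices
            return self.features(x).flatten(1)       # (N, 32 * 4 * 4) feature vectors

    # Dummy balanced 1:1 batch (half "diseased" = 1, half "normal" = 0) for illustration.
    train_x = torch.randn(8, 1, 128, 128)
    train_y = [1, 1, 1, 1, 0, 0, 0, 0]

    extractor = ConvFeatureExtractor().eval()
    with torch.no_grad():
        feats = extractor(train_x).numpy()
    svm = SVC(kernel="rbf").fit(feats, train_y)      # SVM classifier on reduced features
    print(svm.predict(feats[:2]))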
S104: and if the lung CT image to be detected has a focus, the output result is the focus position data.
The lung CT image to be detected is input into the lesion identification model. If there is no lesion in the lung CT image to be detected, an empty result is output and no subsequent labeling is performed. If a lesion is present in the lung CT image to be detected, its position data is output as the result. A rough illustration of this step is given below.
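The sketch assumes the trained model exposes a predict method returning a possibly empty list of diagonal corner pairs; the method name, output format and helper name identify_lesions are assumptions for illustration.

    import numpy as np

    def identify_lesions(model, ct_image_128):
        """Run the trained lesion identification model on one preprocessed slice.
        Returns None when the image contains no lesion (empty result), otherwise
        the lesion position data as integer diagonal corner coordinates."""
        boxes = model.predict(ct_image_128)          # assumed model interface
        if not boxes:
            return None                              # no lesion: nothing to label
        return [np.asarray(box, dtype=int) for box in boxes]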
S105: and marking the region of the focus on the lung CT image to be detected according to the focus position data.
In one or more embodiments of the present disclosure, the labeling the lesion region on the lung CT image to be detected according to the lesion position data specifically includes:
carrying out transverse extension and longitudinal extension on points corresponding to the position coordinates of the diagonal lines in a ray form to obtain intersection points of extension lines; wherein the ray is parallel to a side length of the lung CT image;
and connecting the points corresponding to the position coordinates of the intersection points and the diagonal lines by using a marking line to obtain a lesion region marked by the lung image.
In one or more embodiments of the present disclosure, after the labeling the region where the lesion is located on the lung CT image to be detected according to the lesion position data, the method further includes:
acquiring an image sequence of the lesion region of the lung CT image according to the labeling region of the lung CT image;
determining the volume of the focus region in any layer of lung CT image according to focus pixel points, pixel intervals and layer intervals of the focus in any layer of lung CT image of the image sequence;
superposing to obtain the total lesion volume in the lung CT image according to the volume of each layer of lesion area in the image sequence;
and acquiring the total volume of the lung lobes in the lung CT image, and acquiring the volume ratio of the focus in the lung according to the total volume of the focus and the volume of the lung lobes in the lung CT image.
In one or more embodiments of the present disclosure, after obtaining the volume fraction of the lesion in the lung, the method further comprises:
acquiring a historical case of a user corresponding to the lung CT image;
matching a preset report template according to the position information of the focus;
inputting the position information of the focus, the total volume of the focus, the volume proportion of the focus in the lung and the related data of the historical case into the report template;
and processing according to the report template to obtain a lesion identification report corresponding to the lung CT image.
According to the lesion position data obtained in step S104, an image labeling tool is used to mark the lesion position data output by the lesion identification model on the lung CT image to be detected, giving the position of the region where the lesion is located, so as to assist the doctor in identifying the lesion position.
Specifically, the labeling process is as follows:
and performing transverse extension parallel to the lung CT image to be detected and longitudinal extension parallel to the lung CT image to be detected by using a ray form on a point corresponding to focus position data obtained by the focus identification model and a point corresponding to a diagonal position of the focus to obtain an extension line intersection point after the point corresponding to the diagonal position is extended. And connecting the obtained intersection points and points corresponding to the lesion position data by using marking lines, wherein the obtained rectangular region is a lesion region marked by the lung image. By identifying the focus in the lung CT image and marking the focus area, doctors can be assisted in identifying the area position of the focus in the lung CT image in daily work of hospitals.
In addition, to further lighten the doctor's assessment task, after the lesion region has been labeled, an image sequence of the lesion region of the lung CT scan can be obtained from the labeled regions. Because a lesion region may span multiple slices of the lung CT series, the lesion region in any one slice can be regarded as a thin cylinder. The total number of lesion pixels in each slice is determined from the lesion pixel points of the labeled region; together with the pixel spacing, this gives the lesion area in that slice, and multiplying by the slice spacing gives the volume contributed by that slice. Superimposing the per-slice volumes gives the total lesion volume in the lung CT series. The total lung-lobe volume is obtained from the acquired lung CT series, and from the total lesion volume and the lung-lobe volume, the proportion of the patient's lung volume occupied by the lesion is obtained. A rough sketch of this estimate follows.
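The sketch assumes per-slice boolean lesion masks and spacing values read from the series metadata; the function names are illustrative.

    def lesion_volume_mm3(lesion_masks, pixel_spacing_mm, slice_spacing_mm):
        """lesion_masks: one 2D boolean array per slice of the image sequence,
        marking lesion pixels inside the labeled region.
        pixel_spacing_mm: (row_spacing, col_spacing) of a pixel in millimetres."""
        total = 0.0
        for mask in lesion_masks:
            area = mask.sum() * pixel_spacing_mm[0] * pixel_spacing_mm[1]  # slice area
            total += area * slice_spacing_mm      # each slice acts as a thin cylinder
        return total                              # superposition over all slices

    def lesion_volume_ratio(total_lesion_mm3, lung_lobe_volume_mm3):
        # Proportion of the lung volume occupied by the lesion.
        return total_lesion_mm3 / lung_lobe_volume_mm3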
From the historical cases of the user corresponding to the lung CT image, the patient's past examination data can be obtained. A preset report template is matched according to the patient's lesion position information. Using the matched template, the lesion position information obtained from the lesion identification model, the total lesion volume, the proportion of the lung occupied by the lesion, and the relevant data from the historical cases are processed to produce a lesion identification report corresponding to the patient's lung CT image. This helps doctors quickly grasp the development of the patient's lung lesion once the lesion region has been identified, and saves the time doctors spend retrieving historical cases for comparative analysis. A sketch of this report assembly follows.
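The sketch assumes simple string templates keyed by coarse lesion location; the template wording, the keys, the field names and the sample values are assumptions for illustration.

    REPORT_TEMPLATES = {
        # Hypothetical templates keyed by coarse lesion location.
        "left_lung": "Lesion in the left lung at {position}; volume {volume_mm3:.0f} mm3 "
                     "({ratio:.1%} of lung volume). History: {history}.",
        "right_lung": "Lesion in the right lung at {position}; volume {volume_mm3:.0f} mm3 "
                      "({ratio:.1%} of lung volume). History: {history}.",
    }

    def build_report(location_key, position, volume_mm3, ratio, history):
        template = REPORT_TEMPLATES[location_key]    # match template by lesion location
        return template.format(position=position, volume_mm3=volume_mm3,
                               ratio=ratio, history=history)

    print(build_report("left_lung", "(64, 80)-(92, 110)", 350.0, 0.004,
                       "prior CT 2020-06: no lesion"))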
As shown in fig. 2, in one or more embodiments, the present disclosure provides a lesion identification apparatus for lung CT images, the apparatus including:
at least one processor 201; and the number of the first and second groups,
a memory 202 communicatively coupled to the at least one processor 201; wherein,
the memory 202 stores instructions executable by the at least one processor 201 to enable the at least one processor 201 to:
acquiring a lung CT image read by CT equipment;
taking the lung CT image read by the CT equipment as a lung CT image to be detected;
inputting the lung CT image to be detected into a pre-trained focus recognition model to obtain an output result;
if the lung CT image to be detected has a focus, the output result is focus position data of the lung CT image to be detected;
and marking the region of the focus on the lung CT image to be detected according to the focus position data.
As shown in fig. 3, one or more implementations of the present description provide a non-volatile storage medium storing computer-executable instructions 301, the computer-executable instructions comprising:
acquiring a lung CT image read by CT equipment;
taking the lung CT image read by the CT equipment as a lung CT image to be detected;
inputting the lung CT image to be detected into a pre-trained focus recognition model to obtain an output result;
if the lung CT image to be detected has a focus, the output result is focus position data of the lung CT image to be detected;
and marking the region of the focus on the lung CT image to be detected according to the focus position data.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the embodiments of the apparatus, the device, and the nonvolatile computer storage medium, since they are substantially similar to the embodiments of the method, the description is simple, and for the relevant points, reference may be made to the partial description of the embodiments of the method.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The above description is merely one or more embodiments of the present disclosure and is not intended to limit the present disclosure. Various modifications and alterations to one or more embodiments of the present description will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement or the like made within the spirit and principle of one or more embodiments of the present specification should be included in the scope of the claims of the present specification.

Claims (10)

1. A method for lesion identification in a pulmonary CT image, the method comprising:
acquiring a lung CT image read by CT equipment;
taking the lung CT image read by the CT equipment as a lung CT image to be detected;
inputting the lung CT image to be detected into a pre-trained focus recognition model to obtain an output result;
if the lung CT image to be detected has a focus, the output result is focus position data of the lung CT image to be detected;
and marking the region of the focus on the lung CT image to be detected according to the focus position data.
2. The method of claim 1, wherein before the lung CT image to be detected is input into a pre-trained lesion recognition model, the method further comprises:
collecting a lung CT image with a disease and a normal lung CT image as training images;
filtering the training image based on a preset processing mode to obtain a training sample of the focus identification model;
inputting training samples corresponding to the lung CT image with the disease and the normal lung CT image into a convolutional neural network in the same proportion; taking the focus position data of the training sample as output and input into a convolutional neural network model, and training the convolutional neural network;
and if the recognition result output by the trained convolutional neural network meets the preset accuracy, taking the trained convolutional neural network as a focus recognition model.
3. The method for identifying lesion in pulmonary CT image according to claim 2, wherein after proportionally inputting the training samples corresponding to the diseased pulmonary CT image and the normal pulmonary CT image into the convolutional neural network, the method further comprises:
performing sliding convolution on the training sample based on a convolution kernel of the convolutional neural network to extract convolution characteristics of the training sample;
clustering the convolution features through a pooling layer of the convolution neural network and filtering redundant features to obtain dimension reduction convolution features of the training samples;
and inputting the dimension reduction convolution characteristics into a preset classifier, and identifying the focus of the training sample to obtain a focus identification result.
4. The method for identifying a lesion of a pulmonary CT image according to claim 2, wherein the filtering the training image based on a preset processing manner to obtain a training sample of the lesion identification model specifically comprises:
calculating the area of each connected region of the training image, and extracting the lung region of the training image according to a preset threshold value of the area of the connected region to obtain a first lung image;
processing the first lung image according to a preset CT value range so as to filter edge non-lung tissues of the first lung image to obtain a second lung image;
adding CT noise to a second lung image to obtain a training sample of the lung CT image; wherein the training samples are stored in gray scale with a preset resolution.
5. The method of claim 2, wherein the step of inputting the lesion position data of the training sample as an output to a convolutional neural network model, the step of training the convolutional neural network specifically comprises:
determining the position coordinates of at least one pair of diagonals of the lesion; wherein the position coordinates of the diagonal include: the upper left corner position coordinate and the lower right corner position coordinate, and the upper right corner position coordinate and the lower left corner position coordinate;
and storing the position coordinates of the diagonal line in an integer array format as the focus position data of the training sample.
6. The method for identifying a lesion of a lung CT image according to claim 5, wherein the labeling the lesion region on the lung CT image to be detected according to the lesion position data specifically comprises:
carrying out transverse extension and longitudinal extension on points corresponding to the position coordinates of the diagonal lines in a ray form to obtain intersection points of extension lines; wherein the ray is parallel to a side length of the lung CT image;
and connecting the points corresponding to the position coordinates of the intersection points and the diagonal lines by using a marking line to obtain a lesion region marked by the lung image.
7. The method for identifying lesion in lung CT image according to claim 1, wherein after labeling the region where the lesion is located on the lung CT image to be detected according to the lesion position data, the method further comprises:
acquiring an image sequence of the lesion region of the lung CT image according to the labeling region of the lung CT image;
determining the volume of the focus region in any layer of lung CT image according to focus pixel points, pixel intervals and layer intervals of the focus in any layer of lung CT image of the image sequence;
superposing to obtain the total lesion volume in the lung CT image according to the volume of each layer of lesion area in the image sequence;
and acquiring the total volume of the lung lobes in the lung CT image, and acquiring the volume ratio of the focus in the lung according to the total volume of the focus and the volume of the lung lobes in the lung CT image.
8. The method of claim 7, wherein after obtaining the volume fraction of the lesion in the lung, the method further comprises:
acquiring a historical case of a user corresponding to the lung CT image;
matching a preset report template according to the position information of the focus;
inputting the position information of the focus, the total volume of the focus, the volume proportion of the focus in the lung and the related data of the historical case into the report template;
and processing according to the report template to obtain a lesion identification report corresponding to the lung CT image.
9. A lesion recognition apparatus for lung CT images, the apparatus comprising:
at least one processor; and the number of the first and second groups,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
acquiring a lung CT image read by CT equipment;
taking the lung CT image read by the CT equipment as a lung CT image to be detected;
inputting the lung CT image to be detected into a pre-trained focus recognition model to obtain an output result;
if the lung CT image to be detected has a focus, the output result is focus position data of the lung CT image to be detected;
and marking the region of the focus on the lung CT image to be detected according to the focus position data.
10. A non-volatile storage medium storing computer-executable instructions, the computer-executable instructions comprising:
acquiring a lung CT image read by CT equipment;
taking the lung CT image read by the CT equipment as a lung CT image to be detected;
inputting the lung CT image to be detected into a pre-trained focus recognition model to obtain an output result;
if the lung CT image to be detected has a focus, the output result is focus position data of the lung CT image to be detected;
and marking the region of the focus on the lung CT image to be detected according to the focus position data.
CN202111133615.8A 2021-09-27 2021-09-27 Focus identification method, equipment and medium for lung CT image Active CN113781597B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111133615.8A CN113781597B (en) 2021-09-27 2021-09-27 Focus identification method, equipment and medium for lung CT image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111133615.8A CN113781597B (en) 2021-09-27 2021-09-27 Focus identification method, equipment and medium for lung CT image

Publications (2)

Publication Number Publication Date
CN113781597A true CN113781597A (en) 2021-12-10
CN113781597B CN113781597B (en) 2024-02-09

Family

ID=78853696

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111133615.8A Active CN113781597B (en) 2021-09-27 2021-09-27 Focus identification method, equipment and medium for lung CT image

Country Status (1)

Country Link
CN (1) CN113781597B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107644225A (en) * 2017-10-31 2018-01-30 北京青燕祥云科技有限公司 Pulmonary lesionses recognition methods, device and realization device
CN110956626A (en) * 2019-12-09 2020-04-03 北京推想科技有限公司 Image-based prognosis evaluation method and device
WO2020215557A1 (en) * 2019-04-24 2020-10-29 平安科技(深圳)有限公司 Medical image interpretation method and apparatus, computer device and storage medium
CN112132801A (en) * 2020-09-18 2020-12-25 上海市肺科医院 Lung bullae focus detection method and system based on deep learning

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107644225A (en) * 2017-10-31 2018-01-30 北京青燕祥云科技有限公司 Pulmonary lesionses recognition methods, device and realization device
WO2020215557A1 (en) * 2019-04-24 2020-10-29 平安科技(深圳)有限公司 Medical image interpretation method and apparatus, computer device and storage medium
CN110956626A (en) * 2019-12-09 2020-04-03 北京推想科技有限公司 Image-based prognosis evaluation method and device
CN112132801A (en) * 2020-09-18 2020-12-25 上海市肺科医院 Lung bullae focus detection method and system based on deep learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
杨丽洋; 文戈: "深度学习在医学影像中的应用" [Application of deep learning in medical imaging], 分子影像学杂志 [Journal of Molecular Imaging], no. 02

Also Published As

Publication number Publication date
CN113781597B (en) 2024-02-09

Similar Documents

Publication Publication Date Title
Pranata et al. Deep learning and SURF for automated classification and detection of calcaneus fractures in CT images
Heutink et al. Multi-Scale deep learning framework for cochlea localization, segmentation and analysis on clinical ultra-high-resolution CT images
CN109754387B (en) Intelligent detection and positioning method for whole-body bone imaging radioactive concentration focus
CN111539944B (en) Method, device, electronic equipment and storage medium for acquiring statistical attribute of lung focus
CN107545584B (en) Method, device and system for positioning region of interest in medical image
CN110796613B (en) Automatic identification method and device for image artifacts
JP4702971B2 (en) Computer-aided diagnosis system
US8290568B2 (en) Method for determining a property map of an object, particularly of a living being, based on at least a first image, particularly a magnetic resonance image
US9710907B2 (en) Diagnosis support system using panoramic radiograph and diagnosis support program using panoramic radiograph
Wani et al. Computer-aided diagnosis systems for osteoporosis detection: a comprehensive survey
CN110796636A (en) CT image bone condition detection method and device based on convolutional neural network
CN106991694A (en) Based on marking area area matched heart CT and ultrasound image registration method
CN111340825A (en) Method and system for generating mediastinal lymph node segmentation model
CN113288186A (en) Deep learning algorithm-based breast tumor tissue detection method and device
US8331635B2 (en) Cartesian human morpho-informatic system
Zeng et al. TUSPM-NET: A multi-task model for thyroid ultrasound standard plane recognition and detection of key anatomical structures of the thyroid
CN113781597B (en) Focus identification method, equipment and medium for lung CT image
TWI490790B (en) Dynamic cardiac imaging analysis and cardiac function assessment system
CN109615656A (en) A kind of backbone localization method based on pattern search
Banik et al. Computer-aided detection of architectural distortion in prior mammograms of interval cancer
KR102136107B1 (en) Apparatus and method for alignment of bone suppressed chest x-ray image
US9808175B1 (en) Method and system for analyzing images to quantify brain atrophy
JP7240845B2 (en) Image processing program, image processing apparatus, and image processing method
CN115578285B (en) Mammary gland molybdenum target image detail enhancement method and system
CN112907551B (en) Disease evolution method and device based on ultrasonic detection image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant