CN112634226B - Head CT image detection device, method, electronic device and storage medium - Google Patents

Head CT image detection device, method, electronic device and storage medium

Info

Publication number
CN112634226B
CN112634226B CN202011514659.0A
Authority
CN
China
Prior art keywords
disease
head
image
symptom
classification result
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011514659.0A
Other languages
Chinese (zh)
Other versions
CN112634226A (en)
Inventor
陈凯星
周鑫
毋戈
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN202011514659.0A priority Critical patent/CN112634226B/en
Publication of CN112634226A publication Critical patent/CN112634226A/en
Application granted granted Critical
Publication of CN112634226B publication Critical patent/CN112634226B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30016Brain

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The application relates to the field of medical technology, and in particular provides a head CT image detection device, a head CT image detection method, an electronic device, and a storage medium. The head CT image detection device comprises: a feature extraction module for extracting features from the acquired head CT image to obtain a target feature map; an image segmentation module for segmenting the head CT image based on the target feature map to obtain a head disease symptom image; and a joint classification module for classifying the head disease symptom image to obtain a classification result of the symptoms, a classification result of the disease types, and a classification result of the association relationship between the symptoms and the disease types. Embodiments of the application help to solve the misclassification of disease types and symptoms caused by the same disease presenting different symptoms and different diseases presenting the same symptoms, and help to improve the accuracy of multi-disease, multi-symptom detection.

Description

Head CT image detection device, method, electronic device and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a head CT image detection apparatus, a head CT image detection method, an electronic device, and a storage medium.
Background
Computed Tomography (CT) is an advanced medical imaging technique that is widely used in the examination and diagnosis of diseases. With the development of high-performance computing devices and artificial intelligence technology, intelligent auxiliary diagnosis based on CT images has become possible, and a major research topic in this field is the detection of multiple diseases and multiple symptoms. In the prior art, multi-disease detection mostly adopts a divide-and-conquer approach, that is, the multiple diseases and multiple symptoms are detected and analyzed as independent tasks to realize intelligent auxiliary diagnosis of CT images, but the accuracy of such multi-disease, multi-symptom detection is not high.
Disclosure of Invention
In view of the above problems, the present application provides a head CT image detection apparatus, a method, an electronic device, and a storage medium, which help to improve the accuracy of head multi-disease, multi-symptom detection.
To achieve the above object, a first aspect of an embodiment of the present application provides a head CT image detection method, including:
extracting features of the acquired head CT image to obtain a target feature map;
segmenting the head CT image based on the target feature map to obtain a head disease symptom image;
And classifying the head disease symptom image to obtain a classification result of the symptom, a classification result of the disease and a classification result of the association relation between the symptom and the disease.
With reference to the first aspect, in a possible implementation manner, classifying the head disease symptom image to obtain a classification result of the symptoms, a classification result of the disease types, and a classification result of the association relationship between the symptoms and the disease types includes:
extracting the features corresponding to the head disease symptom image;
inputting the features corresponding to the head disease symptom image into a fully connected layer for classification, and obtaining a classification result of the symptoms through a multi-class classification function, wherein the classification result of the symptoms comprises the category of each symptom, a detection box for the symptom, and a confidence that the detection box contains the symptom;
inputting the features corresponding to the head disease symptom image into a fully convolutional network for classification, obtaining a classification result of the disease types through a multi-class classification function, and obtaining a classification result of the association relationship between the symptoms and the disease types through a binary classification function; the classification result of the disease types comprises a predicted disease-type mask; the classification result of the association relationship indicates whether a symptom in the symptom classification result belongs to a disease type in the disease-type classification result, taking the value 1 if so and 0 otherwise.
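As a rough illustration of the three heads described above — not the patent's implementation; the function names, the direct logit inputs, and the 0.5 threshold are assumptions — the two multi-class outputs can be produced by a softmax and the binary association output by a thresholded sigmoid:

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def joint_classify(symptom_logits, disease_logits, assoc_logit, threshold=0.5):
    """Return the three classification results described in the text:
    symptom class probabilities, disease-type class probabilities, and a
    0/1 association flag (1 = the symptom belongs to the predicted disease)."""
    symptom_probs = softmax(symptom_logits)   # multi-class head (fully connected layer)
    disease_probs = softmax(disease_logits)   # multi-class head (fully convolutional branch)
    assoc = 1 if sigmoid(assoc_logit) >= threshold else 0  # binary association head
    return symptom_probs, disease_probs, assoc
```

In a real network the symptom logits would come from the fully connected layer and the disease-type logits from the fully convolutional branch; here they are passed in directly for clarity.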
With reference to the first aspect, in one possible implementation manner, the classification result of the symptoms, the classification result of the disease types, and the classification result of the association relationship between the symptoms and the disease types are obtained by classifying the head CT image with a trained neural network model; the method further comprises:
calculating a first loss according to the classification result of the symptom and a first gold standard;
calculating a second loss according to the classification result of the symptoms, the classification result of the disease types, the classification result of the association relationship between the symptoms and the disease types, and a second gold standard;
And adjusting parameters of the neural network model according to at least one of the first loss and the second loss.
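Updating the model from at least one of the two losses can be sketched as a weighted sum whose gradient drives backpropagation; a hypothetical helper (the weights w1 and w2 are assumptions, not taken from the patent) where setting either weight to zero trains on only one loss:

```python
def total_loss(first_loss, second_loss, w1=1.0, w2=1.0):
    """Weighted sum of the first (symptom) loss and the second (joint) loss.
    Setting w1 or w2 to 0 adjusts the network parameters from only one of
    the two losses, matching 'at least one of the first and second loss'."""
    return w1 * first_loss + w2 * second_loss
```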
With reference to the first aspect, in one possible implementation manner, the second loss is calculated using the following formula:
where LOSS is the second loss, s is the size of the grid, B is the number of detection boxes output per grid cell, an indicator term takes the value 1 if a head disease symptom exists in the j-th preset anchor box of the i-th grid cell and 0 otherwise, p_i denotes the symptom classification probability at the i-th grid cell, q_i denotes the disease-type classification probability at the i-th grid cell, the mask term denotes the predicted disease-type mask, and dice is a metric measuring the accuracy of the disease-type mask prediction.
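The dice term above is conventionally the Dice coefficient, twice the mask overlap divided by the total mask area; a minimal sketch of that standard metric (not necessarily the patent's exact formulation), with masks given as flat 0/1 sequences:

```python
def dice_coefficient(pred_mask, gt_mask, eps=1e-7):
    """Dice overlap between a predicted binary mask and the gold-standard
    mask, both given as flat 0/1 sequences of equal length. Returns ~1.0
    for a perfect prediction and ~0.0 for no overlap; eps avoids 0/0."""
    intersection = sum(p * g for p, g in zip(pred_mask, gt_mask))
    return (2.0 * intersection + eps) / (sum(pred_mask) + sum(gt_mask) + eps)
```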
With reference to the first aspect, in a possible implementation manner, segmenting the head CT image based on the target feature map to obtain a head disease symptom image includes:
obtaining candidate regions in the head CT image based on the target feature map and preset anchor boxes;
segmenting the head disease symptom image based on the candidate regions.
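Obtaining candidate regions from preset anchor boxes can be sketched as tiling each anchor shape over every cell of the feature-map grid; a hypothetical illustration (the grid size, stride, and anchor shapes in the test are assumed example values, not from the patent):

```python
def generate_candidate_regions(grid_size, anchors, stride):
    """Tile each preset anchor shape (w, h) over every cell of an
    S x S feature-map grid, producing candidate boxes (cx, cy, w, h)
    in image coordinates; stride maps grid cells back to pixels."""
    boxes = []
    for i in range(grid_size):
        for j in range(grid_size):
            cx = (j + 0.5) * stride  # cell centre, x in pixels
            cy = (i + 0.5) * stride  # cell centre, y in pixels
            for (w, h) in anchors:
                boxes.append((cx, cy, w, h))
    return boxes
```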
With reference to the first aspect, in a possible implementation manner, performing feature extraction on the acquired head CT image to obtain the target feature map includes:
segmenting the head CT image to obtain N intracranial bitmaps, wherein each of the N intracranial bitmaps contains one intracranial region, and no region is repeated across the bitmaps; N is an integer greater than 1;
performing enhancement processing on each intracranial bitmap to obtain N enhanced intracranial bitmaps;
obtaining a head CT image to be processed based on the N enhanced intracranial bitmaps;
and extracting features from the head CT image to be processed to obtain the target feature map.
With reference to the first aspect, in a possible implementation manner, obtaining the head CT image to be processed based on the N enhanced intracranial bitmaps includes:
performing overlap detection on every two of the N enhanced intracranial bitmaps to obtain the overlapping region of each pair;
acquiring the gray values of the pixel points in the overlapping region, and calculating the average of the gray values;
replacing the gray values of the pixel points in the overlapping region with the average value to obtain N intracranial bitmaps to be pasted back;
and pasting the N bitmaps back into the head CT image according to the positions of the N intracranial bitmaps in the head CT image, so as to obtain the head CT image to be processed.
A second aspect of an embodiment of the present application provides a head CT image detection apparatus, including:
the feature extraction module is used for extracting features of the acquired head CT image to obtain a target feature map;
the image segmentation module is used for segmenting the head CT image based on the target feature map to obtain a head disease symptom image;
and the joint classification module is used for classifying the head disease symptom image to obtain a classification result of the symptoms, a classification result of the disease types, and a classification result of the association relationship between the symptoms and the disease types.
With reference to the second aspect, in one possible implementation manner, in classifying the head disease symptom image to obtain a classification result of the symptoms, a classification result of the disease types, and a classification result of the association relationship between the symptoms and the disease types, the joint classification module is specifically configured to:
extract the features corresponding to the head disease symptom image;
input the features corresponding to the head disease symptom image into a fully connected layer for classification, and obtain a classification result of the symptoms through a multi-class classification function, wherein the classification result of the symptoms comprises the category of each symptom, a detection box for the symptom, and a confidence that the detection box contains the symptom;
input the features corresponding to the head disease symptom image into a fully convolutional network for classification, obtain a classification result of the disease types through a multi-class classification function, and obtain a classification result of the association relationship between the symptoms and the disease types through a binary classification function; the classification result of the disease types comprises a predicted disease-type mask; the classification result of the association relationship indicates whether a symptom in the symptom classification result belongs to a disease type in the disease-type classification result, taking the value 1 if so and 0 otherwise.
With reference to the second aspect, in a possible implementation manner, the apparatus further includes a parameter adjustment module; the parameter adjustment module is used for:
calculating a first loss according to the classification result of the symptom and a first gold standard;
calculating a second loss according to the classification result of the symptoms, the classification result of the disease types, the classification result of the association relationship between the symptoms and the disease types, and a second gold standard;
And adjusting parameters of the neural network model according to at least one of the first loss and the second loss.
With reference to the second aspect, in one possible implementation manner, the parameter adjustment module calculates the second loss using the following formula:
where LOSS is the second loss, s is the size of the grid, B is the number of detection boxes output per grid cell, an indicator term takes the value 1 if a head disease symptom exists in the j-th preset anchor box of the i-th grid cell and 0 otherwise, p_i denotes the symptom classification probability at the i-th grid cell, q_i denotes the disease-type classification probability at the i-th grid cell, the mask term denotes the predicted disease-type mask, and dice is a metric measuring the accuracy of the disease-type mask prediction.
With reference to the second aspect, in one possible implementation manner, in segmenting the head CT image based on the target feature map to obtain a head disease symptom image, the image segmentation module is specifically configured to:
obtain candidate regions in the head CT image based on the target feature map and preset anchor boxes;
segment the head disease symptom image based on the candidate regions.
With reference to the second aspect, in one possible implementation manner, in performing feature extraction on the acquired head CT image to obtain the target feature map, the feature extraction module is specifically configured to:
segment the head CT image to obtain N intracranial bitmaps, wherein each of the N intracranial bitmaps contains one intracranial region, and no region is repeated across the bitmaps; N is an integer greater than 1;
perform enhancement processing on each intracranial bitmap to obtain N enhanced intracranial bitmaps;
obtain a head CT image to be processed based on the N enhanced intracranial bitmaps;
and extract features from the head CT image to be processed to obtain the target feature map.
With reference to the second aspect, in one possible implementation manner, in obtaining the head CT image to be processed based on the N enhanced intracranial bitmaps, the feature extraction module is specifically configured to:
perform overlap detection on every two of the N enhanced intracranial bitmaps to obtain the overlapping region of each pair;
acquire the gray values of the pixel points in the overlapping region, and calculate the average of the gray values;
replace the gray values of the pixel points in the overlapping region with the average value to obtain N intracranial bitmaps to be pasted back;
and paste the N bitmaps back into the head CT image according to the positions of the N intracranial bitmaps in the head CT image, so as to obtain the head CT image to be processed.
A third aspect of the embodiments of the present application provides an electronic device, including an input device and an output device, and further including a processor adapted to implement one or more instructions; and a computer storage medium storing one or more instructions adapted to be loaded by the processor and to perform the steps of:
extracting features of the acquired head CT image to obtain a target feature map;
segmenting the head CT image based on the target feature map to obtain a head disease symptom image;
And classifying the head disease symptom image to obtain a classification result of the symptom, a classification result of the disease and a classification result of the association relation between the symptom and the disease.
A fourth aspect of the embodiments of the present application provides a computer storage medium storing one or more instructions adapted to be loaded by a processor and to perform the steps of:
extracting features of the acquired head CT image to obtain a target feature map;
segmenting the head CT image based on the target feature map to obtain a head disease symptom image;
And classifying the head disease symptom image to obtain a classification result of the symptom, a classification result of the disease and a classification result of the association relation between the symptom and the disease.
The scheme of the application provides at least the following beneficial effects. Compared with the prior art, the embodiment of the application extracts features from the acquired head CT image to obtain a target feature map; segments the head CT image based on the target feature map to obtain a head disease symptom image; and classifies the head disease symptom image to obtain a classification result of the symptoms, a classification result of the disease types, and a classification result of the association relationship between the symptoms and the disease types. Therefore, in the problem of classifying multiple head diseases and multiple symptoms, symptom classification and disease-type classification are not treated as two independent tasks but are detected and classified within a single task, and the association relationship between the disease types and the symptoms is fully considered. This solves the misclassification of disease types and symptoms caused by the same disease presenting different symptoms and different diseases presenting the same symptoms, and improves the accuracy of multi-disease, multi-symptom detection.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the application, and that other drawings may be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a schematic diagram of an application environment according to an embodiment of the present application;
fig. 2 is a schematic flow chart of a head CT image detection method according to an embodiment of the present application;
FIG. 3 is an exemplary view of a segmented intracranial region according to an embodiment of the present application;
Fig. 4 is a schematic structural diagram of a neural network model according to an embodiment of the present application;
FIG. 5 is a flowchart of another method for detecting CT images of a head according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a head CT image detection apparatus according to an embodiment of the present application;
Fig. 7 is a schematic structural diagram of another head CT image detection apparatus according to an embodiment of the present application;
Fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order that those skilled in the art may better understand the present application, the technical solutions in the embodiments of the present application are clearly and completely described below with reference to the accompanying drawings. It is apparent that the described embodiments are only some embodiments of the present application, not all of them. All other embodiments obtained by those skilled in the art based on the embodiments of the present application without inventive effort shall fall within the scope of the present application.
The terms "comprising" and "having" and any variations thereof, as used in the description, claims and drawings, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus. Furthermore, the terms "first," "second," and "third," etc. are used for distinguishing between different objects and not for describing a particular sequential order.
The embodiment of the application provides a head CT image detection method, which can be implemented based on the application environment shown in fig. 1. Referring to fig. 1, the application environment comprises a CT imaging device and an electronic device: the CT imaging device is used to acquire head CT images of a patient, and the electronic device performs a series of detection and classification processes on the head CT images acquired by the CT imaging device so as to jointly classify the patient's head disease types and symptoms, detecting the multiple symptoms, the multiple disease types, and the correspondence between them. The CT imaging device comprises an imaging layer and a communication layer, and the electronic device comprises a communication layer and a processing layer. Both communication layers are provided with data protocol interfaces; based on these interfaces, the communication layer of the CT imaging device transmits the head CT images acquired by the imaging layer to the communication layer of the electronic device through a wired or wireless network, and the communication layer of the electronic device forwards the acquired head CT images to the processing layer. The processing layer performs feature extraction on the head CT image to obtain a target feature map, segments the head CT image to obtain a head disease symptom image, and based on that image classifies the symptoms, classifies the disease types, and classifies the association relationship between the symptoms and the disease types, outputting a final classification result for reference by medical staff.
As the association relationship between the disease types and the symptoms is fully considered in the detection and classification process, the misclassification of disease types and symptoms caused by the same disease presenting different symptoms and different diseases presenting the same symptoms is solved, and the accuracy of multi-disease, multi-symptom detection is improved.
Based on the application environment shown in fig. 1, the method for detecting a head CT image according to the embodiment of the present application is described in detail below with reference to other drawings.
Referring to fig. 2, fig. 2 is a flowchart of a head CT image detection method according to an embodiment of the present application, where the method is applied to an electronic device, as shown in fig. 2, and includes steps S21-S23:
s21, extracting features of the acquired head CT image to obtain a target feature map.
In the specific embodiments of the application, the head CT image may be acquired currently, retrieved from historical acquisitions, or obtained from a third party after the third party acquires it, and it may be a two-dimensional or a three-dimensional image. Head CT is one of the common methods for detecting whether a patient has a brain disease. At present, whether a patient has a brain disease, and its exact disease type, are mainly judged by experienced medical workers reading the head CT image directly or indirectly, but this detection is extremely inefficient, and due to factors such as the complex structure of the head, its accuracy and comprehensiveness are difficult to guarantee. In view of this, a computer is used here to process the head CT image, and different neural networks are selected as the base architecture according to the dimensionality of the head CT image. For example, for a two-dimensional head CT image, networks such as YOLO-V3 (You Only Look Once, version 3), RetinaNet, or EfficientDet can be selected; for a three-dimensional head CT image, the convolution kernels of the selected neural network need to be adjusted to three-dimensional kernels. Preferably, a neural network for single-stage object detection is used as the base architecture to improve detection speed and reduce video-memory usage. It should be appreciated that the selected neural network includes a backbone network portion, for example: the backbone network of YOLO-V3 may employ Darknet, and the backbone network of RetinaNet may employ ResNet. The head CT image is input into the backbone network portion of the preset neural network for feature extraction, so as to obtain the target feature map.
In one possible implementation manner, performing feature extraction on the acquired head CT image to obtain the target feature map includes:
segmenting the head CT image to obtain N intracranial bitmaps, wherein each of the N intracranial bitmaps contains one intracranial region, and no region is repeated across the bitmaps; N is an integer greater than 1;
performing enhancement processing on each intracranial bitmap to obtain N enhanced intracranial bitmaps;
obtaining a head CT image to be processed based on the N enhanced intracranial bitmaps;
and extracting features from the head CT image to be processed to obtain the target feature map.
As shown in fig. 3, the head CT image can be divided into multiple intracranial parts such as the brain lobes, ventricles, and cisterns by image detection and segmentation algorithms; the small image of each part is an intracranial bitmap. Each pixel of each intracranial bitmap is traversed to compute that bitmap's histogram distribution. If a gray level in the histogram distribution exceeds a preset value, the portion exceeding the preset value is truncated and distributed evenly across all gray levels. Each intracranial bitmap is then partitioned into blocks and the histogram distribution of each block is computed; for each pixel in the image, the four adjacent windows are found, the mapping values of the four window histogram distributions onto that pixel are computed, and bilinear interpolation is applied to obtain the pixel's final mapping value, thereby completing the enhancement of each intracranial bitmap.
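The truncate-and-redistribute step described above can be sketched on a single histogram as follows; this is a hypothetical illustration of the contrast-limiting step (the clip value and bin layout in the test are assumptions), not the patent's exact procedure:

```python
def clip_and_redistribute(hist, clip_limit):
    """Truncate histogram bins that exceed clip_limit and spread the
    truncated excess evenly over all gray levels. The total pixel count
    (the histogram sum) is preserved."""
    excess = sum(max(count - clip_limit, 0) for count in hist)
    clipped = [min(count, clip_limit) for count in hist]
    bonus = excess / len(hist)  # even share of the excess per gray level
    return [count + bonus for count in clipped]
```

Note that after one redistribution pass some bins may slightly exceed the limit again; practical implementations either accept this or iterate the step.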
Further, obtaining the head CT image to be processed based on the N enhanced intracranial bitmaps includes:
performing overlap detection on every two of the N enhanced intracranial bitmaps to obtain the overlapping region of each pair;
acquiring the gray values of the pixel points in the overlapping region, and calculating the average of the gray values;
replacing the gray values of the pixel points in the overlapping region with the average value to obtain N intracranial bitmaps to be pasted back;
and pasting the N bitmaps back into the head CT image according to the positions of the N intracranial bitmaps in the head CT image, so as to obtain the head CT image to be processed.
Specifically, with continued reference to fig. 3, overlapping regions are detected pairwise among the N enhanced intracranial bitmaps. For example, image registration may be used to detect the overlapping region; alternatively, region feature-density detection may be performed on each pair of enhanced intracranial bitmaps, and a region whose feature density is greater than or equal to a threshold is determined to be an overlapping region. For instance, when two enhanced intracranial bitmaps differ in size, edge padding is applied to the smaller one so that the two bitmaps have the same size; a preset window is then slid over one bitmap to select a region to be detected, the feature density in that region is calculated, the corresponding region on the other bitmap is determined and its feature density calculated, and the ratio of the two feature densities is computed to detect the overlapping region. The size of the preset window can be customized. Because the overlapping region tends to be the background of the intracranial parts and has little influence on subsequent detection and classification, the gray values of its pixel points can be replaced by their average value. The N bitmaps to be pasted back are then returned to their original positions in the head CT image, so that each intracranial part is highlighted and important intracranial parts can be located more quickly during later symptom detection.
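The overlap-averaging step can be sketched as follows — a hypothetical illustration (representing the detected overlapping region as a boolean mask is an assumption) of replacing the gray values inside the overlap with their mean:

```python
def average_overlap(image, overlap_mask):
    """Replace the gray values of all pixels inside the overlapping region
    (given as a boolean mask) with their mean value. image is a 2-D list
    of gray values; pixels outside the mask are left unchanged."""
    rows, cols = len(image), len(image[0])
    values = [image[r][c] for r in range(rows)
              for c in range(cols) if overlap_mask[r][c]]
    if not values:                       # no overlap detected: copy as-is
        return [row[:] for row in image]
    mean = sum(values) / len(values)
    return [[mean if overlap_mask[r][c] else image[r][c]
             for c in range(cols)] for r in range(rows)]
```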
In addition, the trained neural network model performs feature extraction on the head CT image to be processed to obtain the target feature map. Optionally, taking YOLO-V3 as an example, the target feature map includes a target feature map of a first scale, a target feature map of a second scale, and a target feature map of a third scale. The head CT image is convolved by 53 convolution layers; the feature map of layer 79, which corresponds to 32-fold downsampling of the input, yields the target feature map of the first scale. The layer-79 feature map is upsampled and fused with the layer-62 feature map to obtain the layer-91 feature map, which corresponds to 16-fold downsampling and yields the target feature map of the second scale. The layer-91 feature map is then upsampled and fused with the layer-36 feature map, and the fused feature map, which corresponds to 8-fold downsampling, yields the target feature map of the third scale.
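The three-scale fusion described above can be sketched as follows. This is a minimal illustration with assumed channel widths, not the real YOLO-V3 layer configuration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleNeck(nn.Module):
    """Minimal sketch of the three-scale fusion described above: the deepest
    map (stride 32) is used directly, then upsampled and fused with the
    stride-16 and stride-8 maps. Channel sizes are illustrative only."""
    def __init__(self, c8=128, c16=256, c32=512):
        super().__init__()
        self.reduce32 = nn.Conv2d(c32, c16, 1)        # before upsampling to stride 16
        self.reduce16 = nn.Conv2d(c16 + c16, c8, 1)   # after fusion, before next upsample

    def forward(self, f8, f16, f32):
        scale1 = f32                                   # stride-32 target map (first scale)
        up = F.interpolate(self.reduce32(f32), scale_factor=2, mode="nearest")
        scale2 = torch.cat([up, f16], dim=1)           # fused stride-16 map (second scale)
        up2 = F.interpolate(self.reduce16(scale2), scale_factor=2, mode="nearest")
        scale3 = torch.cat([up2, f8], dim=1)           # fused stride-8 map (third scale)
        return scale1, scale2, scale3
```

For a 512×512 input, the three outputs would sit at 16×16, 32×32, and 64×64 spatial resolution, matching the 32-, 16-, and 8-fold downsampling described above.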
S22, segmenting the head CT image based on the target feature map to obtain a head disease symptom image.
In a specific embodiment of the present application, the head disease symptom image refers to an image obtained based on a region of interest of the head CT image, and the region of interest refers to a region where head disease symptoms appear. The head disease symptom image comprises at least one of a congenital brain lesion image, a head trauma image, a cerebrovascular disease image, an intracranial tumor image, an intracranial infection image, and an intracranial inflammation image; other head disease symptom images are of course possible and are not listed one by one.
In one possible implementation manner, segmenting the head CT image based on the target feature map to obtain a head disease symptom image includes:
obtaining a candidate region in the head CT image based on the target feature map and a preset anchor frame;
segmenting out the head disease symptom image based on the candidate region.
Specifically, the head CT image is divided into a plurality of grid cells based on the size of the target feature map. For example, if the size of the target feature map is 7×7, the head CT image is divided into a 7×7 grid. With each cell of the grid as a center, k preset anchor frames are generated in the head CT image, and the region framed by a target preset anchor frame among the k preset anchor frames is determined as the candidate region. The target preset anchor frame is an anchor frame containing a head disease symptom, which may be determined according to the predicted confidence of each preset anchor frame; for example, if the confidence is 1, the preset anchor frame is determined to contain a head disease symptom, and if the confidence is 0, it is determined not to. After the candidate region is obtained, it can be quantized to obtain the region of interest; specifically, the candidate region can be mapped to the target feature map by the RoIAlign method to obtain the region of interest, and the corresponding position of the region of interest in the head CT image is segmented out to obtain the head disease symptom image.
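A minimal sketch of how candidate regions might be selected from per-anchor confidences, under the assumption of a fixed confidence threshold; the function name, shapes, and the 0.5 threshold are illustrative, not the patent's exact scheme:

```python
import numpy as np

def candidate_regions(conf, anchors, grid_size, image_size, threshold=0.5):
    """Pick candidate regions as described above: the image is divided into
    grid_size x grid_size cells, each cell proposes k anchor boxes (w, h)
    centered on the cell, and an anchor whose predicted confidence passes
    the threshold is kept as a candidate (x1, y1, x2, y2) box."""
    cell = image_size / grid_size
    boxes = []
    for i in range(grid_size):            # grid row
        for j in range(grid_size):        # grid column
            cx, cy = (j + 0.5) * cell, (i + 0.5) * cell   # cell center
            for k, (w, h) in enumerate(anchors):
                if conf[i, j, k] >= threshold:            # anchor contains a symptom
                    boxes.append((cx - w / 2, cy - h / 2,
                                  cx + w / 2, cy + h / 2))
    return boxes
```

The surviving boxes would then be mapped onto the target feature map (for example with RoIAlign) to extract the region-of-interest features.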
S23, classifying the head disease symptom image to obtain a classification result of symptoms, a classification result of disease types and a classification result of association relation between the symptoms and the disease types.
In a specific embodiment of the present application, as shown in fig. 4, the neural network model mainly includes a backbone network part and a network output part. The backbone network part is mainly used for extracting features of the input head CT image, and most of its operations are convolution operations. On the basis of the prior art, in which only the classification result of the symptoms is output, the network output part adds two branches that further output the classification result of the disease types and the classification result of the association relationship between the symptoms and the disease types.
When classifying the head disease symptom image, the features corresponding to the head disease symptom image (namely, the features of the region of interest) are extracted first. The features corresponding to the head disease symptom image are input into a fully connected layer for classification prediction, and the classification result of the symptoms is obtained through processing by a multi-classification function; the classification result of the symptoms comprises the category of the symptoms, a detection frame of the symptoms, and the confidence that the detection frame contains the symptoms. The features corresponding to the head disease symptom image are also input into a fully convolutional network for classification prediction, and the classification result of the disease type (namely, which disease the patient specifically suffers from) is obtained through processing by a multi-classification function; the classification result of the disease type is a predicted disease-type mask. The classification result of the association relationship between the symptoms and the disease types is obtained through processing by a binary classification function: if a symptom is a symptom of a disease type, the value is 1, otherwise it is 0. The multi-classification function may be a softmax function and the binary classification function may be a sigmoid function.
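The three output branches can be sketched as follows. Channel counts, class counts, and the 7×7 RoI size are illustrative assumptions rather than the patent's exact configuration:

```python
import torch
import torch.nn as nn

class JointHeads(nn.Module):
    """Sketch of the three branches described above: a fully connected
    branch with softmax for the symptom class, a small fully convolutional
    branch with softmax for the per-pixel disease-type mask, and a sigmoid
    branch for the symptom/disease-type association."""
    def __init__(self, in_ch=256, n_symptoms=6, n_diseases=6, roi=7):
        super().__init__()
        self.symptom_fc = nn.Linear(in_ch * roi * roi, n_symptoms)   # symptom logits
        self.mask_fcn = nn.Conv2d(in_ch, n_diseases, 1)              # per-pixel disease logits
        self.assoc_fc = nn.Linear(in_ch * roi * roi, n_symptoms * n_diseases)

    def forward(self, roi_feat):                                     # (N, C, 7, 7) RoI features
        flat = roi_feat.flatten(1)
        symptom_prob = torch.softmax(self.symptom_fc(flat), dim=1)   # multi-classification function
        mask_prob = torch.softmax(self.mask_fcn(roi_feat), dim=1)    # disease-type mask
        assoc = torch.sigmoid(self.assoc_fc(flat))                   # binary association, thresholded to 0/1
        return symptom_prob, mask_prob, assoc
```

Thresholding `assoc` at 0.5 would give the 1/0 association values described in the text.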
Because two branches are added to the basic structure of the selected neural network, one branch for predicting the head disease-type mask (namely, classifying the head disease types) and the other branch for classifying the association relationship between head disease types and symptoms, the classification result of the disease types and the classification result of the association relationship between the symptoms and the disease types can be output in addition to the original output of the selected neural network (namely, the classification result of the symptoms). The association relationships between multiple symptoms and multiple disease types are therefore fully considered when the head CT image is classified into multiple symptoms and multiple disease types.
In one possible embodiment, the method further comprises:
calculating a first loss according to the classification result of the symptom and a first gold standard;
calculating a second loss according to the classification result of the symptom, the classification result of the disease species, the classification result of the association relation between the symptom and the disease species and a second gold standard;
And adjusting parameters of the neural network model according to at least one of the first loss and the second loss.
It should be understood that the first gold standard is the gold standard of the medical head disease category, and the second gold standard is the gold standard of the association of head diseases with symptoms, i.e. the gold standard of the correspondence between head disease symptoms and disease types. The first loss is the loss of the selected infrastructure, for example: in the case where YOLO-V3 is selected as the infrastructure of the present application, the first loss is the loss of YOLO-V3. The second loss is the loss of the association relationship between head disease types and symptoms, and can be computed as:

LOSS = Σ_{i=0}^{s²} Σ_{j=0}^{B} 1_{ij}^{obj} · p_i · q_i · (1 − dice)

where LOSS is the second loss, s represents the size of the grid, B represents the number of output detection frames, 1_{ij}^{obj} indicates whether a head disease symptom exists in the jth preset anchor frame of the ith grid cell (the value is 1 if it exists and 0 otherwise), p_i indicates the classification probability of the symptom at the ith grid cell, q_i indicates the classification probability of the disease type at the ith grid cell, and dice, computed from the predicted disease-type mask, is an index measuring the prediction accuracy of the mask, with a threshold usually set to 0.5. According to at least one of the first loss and the second loss, the parameters of the neural network model are updated with a back-propagation algorithm, iterating until the neural network model converges.
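Under the quantities defined above, the dice score and a second loss of this form could be computed as below. The exact combination of terms is an assumption, since the patent gives the formula only as an image:

```python
import numpy as np

def dice_score(pred_mask, gt_mask, eps=1e-6):
    """Dice coefficient between a predicted and a gold-standard binary
    mask; values >= 0.5 are usually treated as acceptable predictions."""
    inter = np.logical_and(pred_mask, gt_mask).sum()
    return (2.0 * inter + eps) / (pred_mask.sum() + gt_mask.sum() + eps)

def second_loss(obj, p, q, dice):
    """One plausible form of the association loss: sum over grid cells i
    and anchors j of the object indicator obj[i, j], times the symptom
    probability p[i] and disease-type probability q[i], weighted by how
    poor the mask prediction is (1 - dice)."""
    s2, B = obj.shape          # s^2 grid cells, B anchors per cell
    loss = 0.0
    for i in range(s2):
        for j in range(B):
            loss += obj[i, j] * p[i] * q[i] * (1.0 - dice)
    return loss
```

With this form, cells that contain no symptom (obj = 0) contribute nothing, and a perfectly predicted mask (dice = 1) zeroes the loss, matching the role of dice as a mask-accuracy index.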
It can be seen that, in the embodiment of the application, after a head CT image is acquired, feature extraction is performed on it to obtain a target feature map; the head CT image is then segmented based on the target feature map to obtain a head disease symptom image; finally, the head disease symptom image is classified to obtain the classification result of the symptoms, the classification result of the disease types, and the classification result of the association relationship between the symptoms and the disease types. Classifying multiple head diseases and multiple symptoms within one task thus improves detection efficiency; at the same time, because the association relationships between multiple diseases and multiple symptoms are fully considered and the symptoms are fully matched to the disease types, the problems of the same disease presenting different symptoms and different diseases presenting the same symptom are alleviated, improving the accuracy of multi-disease, multi-symptom detection.
In one embodiment of the present application, the head CT image detection scheme of the present application may be applied to the field of intelligent medical treatment. For example, a head CT image acquired by a computed tomography scanner is received, and the head CT image is detected and classified by the head CT image detection method of the present application to obtain a final classification result. The head CT image detection method can fully match multiple head diseases to multiple symptoms, can alleviate the problems of the same disease presenting different symptoms and different diseases presenting the same symptom, can provide a more accurate basis for diagnosis by medical staff, and improves the accuracy of head disease diagnosis.
Referring to fig. 5, which is a flowchart of another head CT image detection method provided by an embodiment of the present application and which can be implemented based on the application environment shown in fig. 1, the method includes steps S51-S55:
S51, extracting features of the acquired head CT image to obtain a target feature map;
S52, segmenting the head CT image based on the target feature map to obtain a head disease symptom image;
S53, extracting the features corresponding to the head disease symptom image;
S54, inputting the features corresponding to the head disease symptom image into a fully connected layer for classification, and obtaining the classification result of the symptoms through processing by a multi-classification function.
Wherein the classification result of the symptoms comprises the category of the symptoms, a detection frame of the symptoms, and the confidence that the detection frame contains the symptoms.
S55, inputting the features corresponding to the head disease symptom image into a fully convolutional network for classification, obtaining the classification result of the disease type through processing by a multi-classification function, and obtaining the classification result of the association relationship between the symptoms and the disease type through processing by a binary classification function.
Wherein the classification result of the disease type comprises a predicted disease-type mask; the classification result of the association relationship between the symptom and the disease type indicates whether the symptom in the classification result of the symptoms is a symptom of the disease type in the classification result of the disease type: if so, the value is 1, otherwise it is 0.
The specific implementation of steps S51-S55 is described in the embodiment shown in fig. 2, and the same or similar beneficial effects can be achieved; to avoid repetition, details are not repeated here.
Based on the description of the embodiment of the head CT image detection method, please refer to fig. 6, fig. 6 is a schematic structural diagram of a head CT image detection device according to an embodiment of the present application, as shown in fig. 6, the device includes:
The feature extraction module 61 is configured to perform feature extraction on the acquired head CT image to obtain a target feature map;
An image segmentation module 62, configured to segment the head CT image based on the target feature map to obtain a head disease symptom image;
And the joint classification module 63 is used for performing classification processing on the head disease symptom image to obtain classification results of symptoms, classification results of disease types and classification results of association relations between the symptoms and the disease types.
In one possible implementation manner, in terms of classifying the head disease symptom image to obtain the classification result of the symptoms, the classification result of the disease types, and the classification result of the association relationship between the symptoms and the disease types, the joint classification module 63 is specifically configured to:
extract the features corresponding to the head disease symptom image;
input the features corresponding to the head disease symptom image into a fully connected layer for classification, and obtain the classification result of the symptoms through processing by a multi-classification function, wherein the classification result of the symptoms comprises the category of the symptoms, a detection frame of the symptoms, and the confidence that the detection frame contains the symptoms;
input the features corresponding to the head disease symptom image into a fully convolutional network for classification, obtain the classification result of the disease type through processing by a multi-classification function, and obtain the classification result of the association relationship between the symptoms and the disease type through processing by a binary classification function; the classification result of the disease type comprises a predicted disease-type mask; the classification result of the association relationship between the symptom and the disease type indicates whether the symptom in the classification result of the symptoms is a symptom of the disease type in the classification result of the disease type: if so, the value is 1, otherwise it is 0.
In one possible embodiment, as shown in fig. 7, the apparatus further includes a parameter adjustment module 64; the parameter adjustment module 64 is configured to:
calculating a first loss according to the classification result of the symptom and a first gold standard;
calculating a second loss according to the classification result of the symptom, the classification result of the disease species, the classification result of the association relation between the symptom and the disease species and a second gold standard;
And adjusting parameters of the neural network model according to at least one of the first loss and the second loss.
In one possible implementation, the parameter adjustment module 64 calculates the second loss using the following formula:

LOSS = Σ_{i=0}^{s²} Σ_{j=0}^{B} 1_{ij}^{obj} · p_i · q_i · (1 − dice)

where LOSS is the second loss, s is the size of the grid, B is the number of output detection frames, 1_{ij}^{obj} indicates whether a head disease symptom exists in the jth preset anchor frame of the ith grid cell (the value is 1 if it exists and 0 otherwise), p_i indicates the classification probability of the symptom at the ith grid cell, q_i indicates the classification probability of the disease type at the ith grid cell, and dice, computed from the predicted disease-type mask, is an index measuring the prediction accuracy of the mask.
In one possible implementation, in segmenting the head CT image based on the target feature map to obtain a head disease symptom image, the image segmentation module 62 is specifically configured to:
obtain a candidate region in the head CT image based on the target feature map and a preset anchor frame;
segment out the head disease symptom image based on the candidate region.
In one possible implementation, in performing feature extraction on the acquired CT image of the head to obtain the target feature map, the feature extraction module 61 is specifically configured to:
dividing the head CT image to obtain N intracranial bitmaps, wherein each of the N intracranial bitmaps comprises a location within the cranium, the locations in different intracranial bitmaps do not repeat, and N is an integer greater than 1;
performing enhancement processing on each intracranial bitmap to obtain N enhanced intracranial bitmaps;
obtaining a head CT image to be processed based on the N enhanced intracranial bitmaps;
and extracting features of the head CT image to be processed to obtain the target feature map.
In one possible implementation, in obtaining the head CT image to be processed based on the N enhanced intracranial bitmaps, the feature extraction module 61 is specifically configured to:
perform overlap detection on every two intracranial bitmaps of the N enhanced intracranial bitmaps to obtain the overlapping region of each pair;
acquire the gray values of the pixel points in the overlapping area and calculate the average of those gray values;
replace the gray values of the pixel points in the overlapping area with the average value to obtain N intracranial bitmaps to be pasted back;
and paste the N intracranial bitmaps to be pasted back into the head CT image at the positions the N intracranial bitmaps occupy in the head CT image, so as to obtain the head CT image to be processed.
According to embodiments of the present application, the units of the head CT image detection apparatus shown in fig. 6 or fig. 7 may be separately or completely combined into one or several additional units, or one or more of them may be further split into a plurality of units with smaller functions, which can achieve the same operation without affecting the technical effects of the embodiments of the present application. The above units are divided based on logical functions; in practical applications, the function of one unit may be implemented by a plurality of units, or the functions of a plurality of units may be implemented by one unit. In other embodiments of the present application, the head CT image detection apparatus may also include other units, and in practical applications these functions may be implemented with the assistance and cooperation of a plurality of other units.
According to another embodiment of the present application, the head CT image detection apparatus shown in fig. 6 or fig. 7 may be constructed, and the head CT image detection method of the embodiments of the present application implemented, by running a computer program (including program code) capable of executing the steps of the methods shown in fig. 2 or fig. 5 on a general-purpose computing device, such as a computer, that includes processing elements such as a central processing unit (CPU) and storage elements such as a random access memory (RAM) and a read-only memory (ROM). The computer program may be recorded on, for example, a computer-readable recording medium, and loaded into and executed by the above computing device via the computer-readable recording medium.
Based on the description of the method embodiment and the device embodiment, the embodiment of the application also provides electronic equipment. Referring to fig. 8, the electronic device includes at least a processor 81, an input device 82, an output device 83, and a computer storage medium 84. Wherein the processor 81, input device 82, output device 83, and computer storage medium 84 within the electronic device may be connected by a bus or other means.
The computer storage medium 84 may be stored in a memory of the electronic device; the computer storage medium 84 is used to store a computer program comprising program instructions, and the processor 81 is used to execute the program instructions stored in the computer storage medium 84. The processor 81, or central processing unit (CPU), is the computing core and control core of the electronic device; it is adapted to implement one or more instructions, in particular to load and execute one or more instructions to implement a corresponding method flow or function.
In one embodiment, the processor 81 of the electronic device provided in the embodiment of the present application may be used to perform a series of detection and classification processes of CT images of the head:
extracting features of the acquired head CT image to obtain a target feature map;
segmenting the head CT image based on the target feature map to obtain a head disease symptom image;
and classifying the head disease symptom image to obtain the classification result of the symptoms, the classification result of the disease types, and the classification result of the association relationship between the symptoms and the disease types.
In still another embodiment, the processor 81 performs the classification processing on the head disease symptom image to obtain the classification result of the symptoms, the classification result of the disease types, and the classification result of the association relationship between the symptoms and the disease types, including:
extracting the features corresponding to the head disease symptom image;
inputting the features corresponding to the head disease symptom image into a fully connected layer for classification, and obtaining the classification result of the symptoms through processing by a multi-classification function, wherein the classification result of the symptoms comprises the category of the symptoms, a detection frame of the symptoms, and the confidence that the detection frame contains the symptoms;
inputting the features corresponding to the head disease symptom image into a fully convolutional network for classification, obtaining the classification result of the disease type through processing by a multi-classification function, and obtaining the classification result of the association relationship between the symptoms and the disease type through processing by a binary classification function; the classification result of the disease type comprises a predicted disease-type mask; the classification result of the association relationship between the symptom and the disease type indicates whether the symptom in the classification result of the symptoms is a symptom of the disease type in the classification result of the disease type: if so, the value is 1, otherwise it is 0.
In yet another embodiment, the classification result of the symptom, the classification result of the disease and the classification result of the association relationship between the symptom and the disease are obtained by classifying the head CT image using a trained neural network model; the processor 81 is further configured to perform:
calculating a first loss according to the classification result of the symptom and a first gold standard;
calculating a second loss according to the classification result of the symptom, the classification result of the disease species, the classification result of the association relation between the symptom and the disease species and a second gold standard;
And adjusting parameters of the neural network model according to at least one of the first loss and the second loss.
In yet another embodiment, the processor 81 calculates the second loss using the following formula:

LOSS = Σ_{i=0}^{s²} Σ_{j=0}^{B} 1_{ij}^{obj} · p_i · q_i · (1 − dice)

where LOSS is the second loss, s is the size of the grid, B is the number of output detection frames, 1_{ij}^{obj} indicates whether a head disease symptom exists in the jth preset anchor frame of the ith grid cell (the value is 1 if it exists and 0 otherwise), p_i indicates the classification probability of the symptom at the ith grid cell, q_i indicates the classification probability of the disease type at the ith grid cell, and dice, computed from the predicted disease-type mask, is an index measuring the prediction accuracy of the mask.
In yet another embodiment, the processor 81 performs the segmentation of the head CT image based on the target feature map to obtain a head disease symptom image, including:
obtaining a candidate region in the head CT image based on the target feature map and a preset anchor frame;
segmenting out the head disease symptom image based on the candidate region.
In yet another embodiment, the processor 81 performs the feature extraction on the acquired head CT image to obtain the target feature map, including:
dividing the head CT image to obtain N intracranial bitmaps, wherein each of the N intracranial bitmaps comprises a location within the cranium, the locations in different intracranial bitmaps do not repeat, and N is an integer greater than 1;
performing enhancement processing on each intracranial bitmap to obtain N enhanced intracranial bitmaps;
obtaining a head CT image to be processed based on the N enhanced intracranial bitmaps;
and extracting features of the head CT image to be processed to obtain the target feature map.
In yet another embodiment, the processor 81 performs the obtaining of the head CT image to be processed based on the N enhanced intracranial bitmaps, including:
performing overlap detection on every two intracranial bitmaps of the N enhanced intracranial bitmaps to obtain the overlapping region of each pair;
acquiring the gray values of the pixel points in the overlapping area and calculating the average of those gray values;
replacing the gray values of the pixel points in the overlapping area with the average value to obtain N intracranial bitmaps to be pasted back;
and pasting the N intracranial bitmaps to be pasted back into the head CT image at the positions the N intracranial bitmaps occupy in the head CT image, so as to obtain the head CT image to be processed.
By way of example, the electronic device described above may be a server, a cloud server, a computer host, a server cluster, etc., including but not limited to the processor 81, the input device 82, the output device 83, and the computer storage medium 84. It will be appreciated by those skilled in the art that the schematic diagram is merely an example of an electronic device and does not limit it; an electronic device may include more or fewer components than shown, combine certain components, or use different components.
It should be noted that, since the steps in the above-described head CT image detection method are implemented when the processor 81 of the electronic device executes the computer program, the embodiments of the above-described head CT image detection method are all applicable to the electronic device, and all achieve the same or similar beneficial effects.
The embodiment of the application also provides a computer storage medium (Memory), which is a Memory device in the electronic device and is used for storing programs and data. It will be appreciated that the computer storage medium herein may include both a built-in storage medium in the terminal and an extended storage medium supported by the terminal. The computer storage medium provides a storage space that stores an operating system of the terminal. Also stored in this memory space are one or more instructions, which may be one or more computer programs (including program code), adapted to be loaded and executed by the processor 81. The computer storage medium herein may be a high-speed RAM memory or a non-volatile memory (non-volatile memory), such as at least one magnetic disk memory; alternatively, it may be at least one computer storage medium located remotely from the aforementioned processor 81. In one embodiment, one or more instructions stored in a computer storage medium may be loaded and executed by processor 81 to implement the corresponding steps described above with respect to the head CT image detection method.
The computer program of the computer storage medium may illustratively include computer program code, which may be in source code form, object code form, executable file or some intermediate form, etc. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a U disk, a removable hard disk, a magnetic disk, an optical disk, a computer Memory, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth.
It should be noted that, since the steps in the above-mentioned head CT image detection method are implemented when the computer program of the computer storage medium is executed by the processor, all the embodiments of the above-mentioned head CT image detection method are applicable to the computer storage medium, and the same or similar beneficial effects can be achieved.
The foregoing describes the embodiments of the application in detail; specific examples are used herein to explain the principles and implementations of the application, and the above description of the embodiments is provided only to help understand the method and core ideas of the application. Meanwhile, those skilled in the art may make changes to the specific implementations and the application scope according to the ideas of the application; in summary, the contents of this description should not be construed as limiting the application.

Claims (7)

1. A head CT image detection apparatus, the apparatus comprising:
the feature extraction module is used for extracting features of the acquired head CT image to obtain a target feature map;
the image segmentation module is used for segmenting the head CT image based on the target feature map to obtain a head disease symptom image;
the joint classification module is used for classifying the head disease symptom image to obtain a classification result of symptoms, a classification result of disease types, and a classification result of the association relationship between the symptoms and the disease types, including:
extracting the features corresponding to the head disease symptom image;
inputting the features corresponding to the head disease symptom image into a fully connected layer for classification, and obtaining the classification result of the symptoms through processing by a multi-classification function;
inputting the features corresponding to the head disease symptom image into a fully convolutional network for classification, obtaining the classification result of the disease type through processing by a multi-classification function, and obtaining the classification result of the association relationship between the symptoms and the disease type through processing by a binary classification function;
the classification result of the symptoms comprises the category of the symptoms, a detection frame of the symptoms, and the confidence that the detection frame contains the symptoms; the classification result of the disease type comprises a predicted disease-type mask; the classification result of the association relationship between the symptom and the disease type indicates whether the symptom in the classification result of the symptoms is a symptom of the disease type in the classification result of the disease type: if so, the value is 1, otherwise it is 0;
the classification result of the symptoms, the classification result of the disease types, and the classification result of the association relationship between the symptoms and the disease types are obtained by the joint classification module classifying the head CT image using a trained neural network model;
The parameter adjustment module is used for calculating a first loss according to the classification result of the symptom and a first gold standard; calculating a second loss according to the classification result of the symptom, the classification result of the disease species, the classification result of the association relation between the symptom and the disease species and a second gold standard; and adjusting parameters of the neural network model according to at least one of the first loss and the second loss.
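The joint classification step of claim 1 can be sketched as follows. This is an illustrative toy model, not the patented implementation: the feature size, the number of symptom and disease-type classes, and the weight names (`W_sym`, `W_dis`, `w_assoc`) are all hypothetical, and random weights stand in for a trained network.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    """Multi-class classification function."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def sigmoid(x):
    """Binary classification function."""
    return 1.0 / (1.0 + np.exp(-x))

# A feature vector extracted from one head disease symptom image
# (256 dimensions is an arbitrary choice for the sketch).
feat = rng.standard_normal(256)

# Fully connected layer -> multi-class function -> symptom classification.
W_sym = rng.standard_normal((256, 5))        # 5 hypothetical symptom classes
symptom_probs = softmax(feat @ W_sym)
symptom_class = int(np.argmax(symptom_probs))

# Second head -> multi-class function -> disease-type classification,
# and binary function -> symptom/disease-type association (1 or 0).
W_dis = rng.standard_normal((256, 3))        # 3 hypothetical disease types
disease_probs = softmax(feat @ W_dis)
w_assoc = rng.standard_normal(256)
association = int(sigmoid(feat @ w_assoc) >= 0.5)  # 1: symptom belongs to the disease type

print(symptom_class, disease_probs.shape, association)
```

In the claim the second head is a fully convolutional network producing a disease-type mask; the dense layer above merely stands in for it to keep the sketch short.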
2. The apparatus according to claim 1, wherein, in segmenting the head CT image based on the target feature map to obtain the head disease symptom image, the image segmentation module is specifically configured to:
obtain a candidate region in the head CT image based on the target feature map and preset anchor frames; and
segment the head disease symptom image based on the candidate region.
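The anchor-based candidate-region step of claim 2 might look like the following sketch. The slice size, feature-map stride, preset anchor sizes, and the trivial "objectness" score (a real model would score anchors with a learned layer) are all assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

ct_slice = rng.random((128, 128))     # stand-in head CT slice
feature_map = rng.random((16, 16))    # stand-in target feature map
stride = 128 // 16                    # one feature-map cell -> 8x8 image pixels
anchors = [(16, 16), (32, 32)]        # preset anchor frame sizes, in pixels

# Score every anchor at every feature-map cell and keep the best one;
# here the score is simply the feature value at that cell.
best = None
for iy in range(16):
    for ix in range(16):
        for ah, aw in anchors:
            score = float(feature_map[iy, ix])
            if best is None or score > best[0]:
                cy = iy * stride + stride // 2
                cx = ix * stride + stride // 2
                best = (score, cy, cx, ah, aw)

# Crop the candidate region from the CT slice around the winning anchor.
_, cy, cx, ah, aw = best
y0, x0 = max(0, cy - ah // 2), max(0, cx - aw // 2)
candidate = ct_slice[y0:y0 + ah, x0:x0 + aw]   # region fed to segmentation
print(candidate.shape)
```

A real detector would keep many scored anchors and refine them; one winning anchor is enough to show the mapping from feature-map cell to image region.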
3. The apparatus according to claim 1 or 2, wherein, in extracting features from the acquired head CT image to obtain the target feature map, the feature extraction module is specifically configured to:
divide the head CT image to obtain N intracranial bitmaps, wherein each of the N intracranial bitmaps contains one intracranial location, the locations in the intracranial bitmaps do not repeat, and N is an integer greater than 1;
perform enhancement processing on each intracranial bitmap to obtain N enhanced intracranial bitmaps;
obtain a head CT image to be processed based on the N enhanced intracranial bitmaps; and
extract features from the head CT image to be processed to obtain the target feature map.
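The split-enhance-rebuild pipeline of claim 3 can be sketched as below. The 2x2 grid partition (N = 4) and the min-max contrast stretch are illustrative stand-ins; the patent fixes neither the partition of intracranial locations nor the enhancement method.

```python
import numpy as np

rng = np.random.default_rng(2)
ct = rng.random((64, 64))   # stand-in head CT image

# Divide into N = 4 non-overlapping intracranial location bitmaps.
halves = [np.hsplit(h, 2) for h in np.vsplit(ct, 2)]
bitmaps = [tile for row in halves for tile in row]   # 4 tiles, no repeated locations

def enhance(tile):
    """Min-max contrast stretch as a stand-in enhancement processing."""
    lo, hi = tile.min(), tile.max()
    return (tile - lo) / (hi - lo + 1e-12)

enhanced = [enhance(b) for b in bitmaps]

# Reassemble the enhanced tiles into the head CT image to be processed,
# from which the target feature map would then be extracted.
top = np.hstack(enhanced[:2])
bottom = np.hstack(enhanced[2:])
to_process = np.vstack([top, bottom])
print(to_process.shape)
```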
4. The apparatus according to claim 3, wherein, in obtaining the head CT image to be processed based on the N enhanced intracranial bitmaps, the feature extraction module is specifically configured to:
perform overlap detection on every two of the N enhanced intracranial bitmaps to obtain the overlapping region of each pair;
acquire the gray values of the pixels in the overlapping region and calculate their average value;
replace the gray values of the pixels in the overlapping region with the average value to obtain N intracranial bitmaps to be pasted back; and
paste the N intracranial bitmaps to be pasted back into the head CT image according to the positions of the N intracranial bitmaps in the head CT image to obtain the head CT image to be processed;
wherein the classification result of the symptoms comprises the category of each symptom, a detection frame of the symptom, and the confidence that the detection frame contains the symptom; the classification result of the disease types comprises a predicted disease-type mask; and the classification result of the association between the symptoms and the disease types indicates whether a symptom in the classification result of the symptoms belongs to a disease type in the classification result of the disease types, taking the value 1 if so and 0 otherwise.
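The overlap-averaging and paste-back step of claim 4 reduces to the following sketch for two tiles. The tile positions, sizes, and constant gray values are illustrative only; a full implementation would loop over every pair of the N bitmaps.

```python
import numpy as np

canvas = np.zeros((8, 12))          # the head CT image being rebuilt
a = np.full((8, 8), 10.0)           # enhanced bitmap A, at columns 0..7
b = np.full((8, 8), 30.0)           # enhanced bitmap B, at columns 4..11

# Overlap detection: image columns 4..7 are covered by both tiles
# (columns 4..7 of A correspond to columns 0..3 of B).
mean_gray = (a[:, 4:8] + b[:, 0:4]) / 2.0

# Replace the overlap gray values with the average in both tiles.
a[:, 4:8] = mean_gray
b[:, 0:4] = mean_gray

# Paste both tiles back at their original positions to obtain
# the head CT image to be processed.
canvas[:, 0:8] = a
canvas[:, 4:12] = b
print(canvas[0, 0], canvas[0, 5], canvas[0, 11])  # 10.0 20.0 30.0
```

The averaging makes the seam between tiles continuous: pixels covered by only one tile keep that tile's value, while shared pixels get the mean (here 20.0).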
5. A head CT image detection method, the method comprising:
extracting features from an acquired head CT image to obtain a target feature map;
segmenting the head CT image based on the target feature map to obtain a head disease symptom image;
classifying the head disease symptom image to obtain a classification result of symptoms, a classification result of disease types, and a classification result of the association between the symptoms and the disease types, including:
extracting features corresponding to the head disease symptom image;
inputting the features corresponding to the head disease symptom image into a fully connected layer for classification, and obtaining the classification result of the symptoms through a multi-class classification function;
inputting the features corresponding to the head disease symptom image into a fully convolutional network for classification, obtaining the classification result of the disease types through a multi-class classification function, and obtaining the classification result of the association between the symptoms and the disease types through a binary classification function;
wherein the classification result of the symptoms comprises the category of each symptom, a detection frame of the symptom, and the confidence that the detection frame contains the symptom; the classification result of the disease types comprises a predicted disease-type mask; and the classification result of the association between the symptoms and the disease types indicates whether a symptom in the classification result of the symptoms belongs to a disease type in the classification result of the disease types, taking the value 1 if so and 0 otherwise;
the classification result of the symptoms, the classification result of the disease types, and the classification result of the association between them being obtained by classifying the head CT image with a trained neural network model;
calculating a first loss according to the classification result of the symptoms and a first gold standard; calculating a second loss according to the classification result of the symptoms, the classification result of the disease types, the classification result of the association between the symptoms and the disease types, and a second gold standard; and adjusting parameters of the neural network model according to at least one of the first loss and the second loss.
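The two-loss scheme shared by claims 1 and 5 can be sketched as follows. Cross-entropy and binary cross-entropy are assumed loss functions chosen for illustration; the patent only names a "first loss" (symptoms vs. a first gold standard) and a "second loss" (symptoms, disease types, and their association vs. a second gold standard), without fixing their form.

```python
import numpy as np

def cross_entropy(probs, label):
    """Loss against a multi-class gold-standard label."""
    return -np.log(probs[label] + 1e-12)

def binary_cross_entropy(p, y):
    """Loss against a binary gold-standard association (0 or 1)."""
    return -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

# Hypothetical model outputs for one head disease symptom image.
symptom_probs = np.array([0.1, 0.7, 0.2])   # symptom classification result
disease_probs = np.array([0.6, 0.4])        # disease-type classification result
assoc_prob = 0.9                            # predicted symptom/disease association

# Hypothetical gold standards.
gold_symptom, gold_disease, gold_assoc = 1, 0, 1

first_loss = cross_entropy(symptom_probs, gold_symptom)
second_loss = (cross_entropy(symptom_probs, gold_symptom)
               + cross_entropy(disease_probs, gold_disease)
               + binary_cross_entropy(assoc_prob, gold_assoc))

# The neural network parameters would then be adjusted with at least one
# of the two losses, e.g. gradient descent on first_loss + second_loss.
print(float(first_loss), float(second_loss))
```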
6. An electronic device comprising an input device and an output device, further comprising:
a processor adapted to implement one or more instructions; and
a computer storage medium storing one or more instructions adapted to be loaded by the processor and to perform the method of claim 5.
7. A computer storage medium storing one or more instructions adapted to be loaded by a processor and to perform the method of claim 5.
CN202011514659.0A 2020-12-18 2020-12-18 Head CT image detection device, method, electronic device and storage medium Active CN112634226B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011514659.0A CN112634226B (en) 2020-12-18 2020-12-18 Head CT image detection device, method, electronic device and storage medium


Publications (2)

Publication Number Publication Date
CN112634226A CN112634226A (en) 2021-04-09
CN112634226B true CN112634226B (en) 2024-05-14

Family

ID=75317868

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011514659.0A Active CN112634226B (en) 2020-12-18 2020-12-18 Head CT image detection device, method, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN112634226B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115147668B (en) * 2022-09-06 2022-12-27 北京鹰瞳科技发展股份有限公司 Training method of disease classification model, disease classification method and related products

Citations (4)

Publication number Priority date Publication date Assignee Title
CN110363226A (en) * 2019-06-21 2019-10-22 平安科技(深圳)有限公司 Ophthalmology disease classifying identification method, device and medium based on random forest
CN110969613A (en) * 2019-12-06 2020-04-07 广州柏视医疗科技有限公司 Intelligent pulmonary tuberculosis identification method and system with image sign interpretation
CN111476775A (en) * 2020-04-07 2020-07-31 广州柏视医疗科技有限公司 DR symptom identification device and method
CN111968137A (en) * 2020-10-22 2020-11-20 平安科技(深圳)有限公司 Head CT image segmentation method and device, electronic device and storage medium



Similar Documents

Publication Publication Date Title
CN107886514B (en) Mammary gland molybdenum target image lump semantic segmentation method based on depth residual error network
Gunasekara et al. A systematic approach for MRI brain tumor localization and segmentation using deep learning and active contouring
Nie et al. Automatic detection of melanoma with yolo deep convolutional neural networks
Ahmmed et al. Classification of tumors and it stages in brain MRI using support vector machine and artificial neural network
US11562491B2 (en) Automatic pancreas CT segmentation method based on a saliency-aware densely connected dilated convolutional neural network
CN109584209B (en) Vascular wall plaque recognition apparatus, system, method, and storage medium
CN110310287B (en) Automatic organ-at-risk delineation method, equipment and storage medium based on neural network
CN112070781B (en) Processing method and device of craniocerebral tomography image, storage medium and electronic equipment
Duan et al. A novel GA-based optimized approach for regional multimodal medical image fusion with superpixel segmentation
CN115601602A (en) Cancer tissue pathology image classification method, system, medium, equipment and terminal
CN110766670A (en) Mammary gland molybdenum target image tumor localization algorithm based on deep convolutional neural network
CN113450328A (en) Medical image key point detection method and system based on improved neural network
CN112132827A (en) Pathological image processing method and device, electronic equipment and readable storage medium
CN112634226B (en) Head CT image detection device, method, electronic device and storage medium
CN116245832A (en) Image processing method, device, equipment and storage medium
Nazir et al. Machine Learning‐Based Lung Cancer Detection Using Multiview Image Registration and Fusion
Ramachandran et al. Mutual informative MapReduce and minimum quadrangle classification for brain tumor big data
CN113724185B (en) Model processing method, device and storage medium for image classification
Singh et al. Detection and classification of brain tumor using hybrid feature extraction technique
Majji et al. Smart iot in breast cancer detection using optimal deep learning
Kr Ghosh et al. Development of intuitionistic fuzzy special embedded convolutional neural network for mammography enhancement
CN108154107B (en) Method for determining scene category to which remote sensing image belongs
CN115375583A (en) PET parameter image enhancement method, device, equipment and storage medium
Zhu et al. Learning classification of big medical imaging data based on partial differential equation
Mohanty et al. Fracture detection from X-ray images using different Machine Learning Techniques

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40041503

Country of ref document: HK

GR01 Patent grant