CN112949654A - Image detection method and related device and equipment

Info

Publication number
CN112949654A
CN112949654A
Authority
CN
China
Prior art keywords
feature map
probability
map
feature
medical image
Prior art date
Legal status
Pending
Application number
CN202110214861.XA
Other languages
Chinese (zh)
Inventor
孙辉
韩泓泽
刘星龙
黄宁
张少霆
Current Assignee
Shanghai Sensetime Intelligent Technology Co Ltd
Original Assignee
Shanghai Sensetime Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Sensetime Intelligent Technology Co Ltd filed Critical Shanghai Sensetime Intelligent Technology Co Ltd
Priority to CN202110214861.XA priority Critical patent/CN112949654A/en
Publication of CN112949654A publication Critical patent/CN112949654A/en
Priority to PCT/CN2021/117801 priority patent/WO2022179083A1/en
Priority to JP2022549312A priority patent/JP2023518160A/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 ICT specially adapted for the handling or processing of medical images
    • G16H30/40 ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Radiology & Medical Imaging (AREA)
  • Epidemiology (AREA)
  • Medical Informatics (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

The application discloses an image detection method and a related device and equipment, wherein the image detection method comprises the following steps: acquiring a medical image to be detected; performing feature extraction on the medical image to be detected to obtain first feature maps of a plurality of dimensions; taking a first feature map of a preset dimension as a reference feature map, and generating a lesion probability map by using the reference feature map, wherein the lesion probability map is used for representing the probability that different regions in the medical image to be detected belong to a lesion; fusing the lesion probability map with the first feature maps of the plurality of dimensions to obtain a final fused feature map; and performing detection processing on the final fused feature map to obtain a detection result about the lesion in the medical image to be detected. According to this scheme, the accuracy of image detection can be improved.

Description

Image detection method and related device and equipment
Technical Field
The present application relates to the field of artificial intelligence technologies, and in particular, to an image detection method, and a related apparatus and device.
Background
Medical images such as CT (Computed Tomography) images are clinically significant. For example, a doctor can find organ lesions such as pneumonia through medical images. However, the accuracy of manual examination for organ lesions often depends heavily on the physician's experience. In addition, when the number of cases to be examined rises rapidly, for example during an outbreak of an infectious disease, physicians inevitably tire as the workload grows, which affects examination accuracy.
With the development of information technology, electronic devices with processing capabilities, such as computers, are gradually taking over manual tasks in various industries. In clinical applications, electronic devices are used to detect medical images and obtain detection results about the lesions therein, thereby assisting doctors. However, existing image detection technology suffers from low accuracy. In view of this, how to improve the accuracy of image detection is an urgent problem to be solved.
Disclosure of Invention
The application provides an image detection method and a related device and equipment.
A first aspect of the present application provides an image detection method, including: acquiring a medical image to be detected; performing feature extraction on the medical image to be detected to obtain first feature maps of a plurality of dimensions; taking a first feature map of a preset dimension as a reference feature map, and generating a lesion probability map by using the reference feature map, wherein the lesion probability map is used for representing the probability that different regions in the medical image to be detected belong to a lesion; fusing the lesion probability map with the first feature maps of the plurality of dimensions to obtain a final fused feature map; and performing detection processing on the final fused feature map to obtain a detection result about the lesion in the medical image to be detected.
Therefore, feature extraction is performed on the acquired medical image to be detected to obtain first feature maps of a plurality of dimensions; a first feature map of a preset dimension is taken as a reference feature map to generate a lesion probability map, which represents the probability that different regions in the medical image to be detected belong to a lesion; and the lesion probability map is fused with the first feature maps of the plurality of dimensions to obtain a final fused feature map. In this way, the lesion probability map participates in the fusion with the first feature maps as a global feature, so that the final fused feature map strengthens the specificity to the lesion, and the accuracy of image detection can be improved when the detection result about the lesion in the medical image to be detected is obtained by performing detection processing on the final fused feature map.
Before generating the lesion probability map by using the reference feature map, the method further includes: performing prediction processing by using the reference feature map to obtain a first probability value that the medical image to be detected contains a lesion; and determining, based on the first probability value, whether to perform the step of generating the lesion probability map by using the reference feature map and the subsequent steps.
Therefore, prediction processing is performed by using the reference feature map to obtain a first probability value that the medical image to be detected contains a lesion, and whether to perform the step of generating the lesion probability map by using the reference feature map and the subsequent steps is determined based on the first probability value. In this way, false-positive detection results obtained when the medical image to be detected contains no lesion can be avoided, which further improves the accuracy of image detection; moreover, negative data can be screened out in advance of detection, which improves the efficiency of image detection.
Determining, based on the first probability value, whether to perform the step of generating the lesion probability map by using the reference feature map and the subsequent steps includes: if the first probability value satisfies a first preset condition, performing the step of generating the lesion probability map by using the reference feature map and the subsequent steps. Alternatively, in the case where the medical image to be detected is a two-dimensional medical image contained in a three-dimensional medical image, the determining includes: sorting the first probability values corresponding to the two-dimensional medical images in descending order, and selecting a preset number of the first probability values ranked first; performing preset processing on the preset number of first probability values to obtain a second probability value; and if the second probability value satisfies a second preset condition, performing the step of generating the lesion probability map by using the reference feature map and the subsequent steps.
Therefore, when the first probability value satisfies the first preset condition, the step of generating the lesion probability map by using the reference feature map and the subsequent steps are performed; or, in the case where the medical image to be detected is a two-dimensional medical image contained in a three-dimensional medical image, the first probability values of the two-dimensional medical images are sorted in descending order, a preset number of first probability values ranked first are selected and subjected to preset processing to obtain a second probability value, and when the second probability value satisfies the second preset condition, the step of generating the lesion probability map by using the reference feature map and the subsequent steps are performed. In this way, negative data can be screened out in advance of detection, improving both the accuracy and the efficiency of image detection.
The first preset condition includes: the first probability value is greater than or equal to a first probability threshold. The second preset condition includes: the second probability value is greater than or equal to a second probability threshold. The preset processing is an averaging operation.
Therefore, by setting the first preset condition such that the first probability value is greater than or equal to the first probability threshold, setting the second preset condition such that the second probability value is greater than or equal to the second probability threshold, and setting the preset processing as an averaging operation, the amount of calculation for the second probability value can be kept low, and the second probability value can accurately reflect the possibility that the three-dimensional medical image contains a lesion.
If the first probability value does not satisfy the first preset condition, or the second probability value does not satisfy the second preset condition, it is determined that the medical image to be detected contains no lesion.
Therefore, when the first probability value does not satisfy the first preset condition or the second probability value does not satisfy the second preset condition, the medical image to be detected is determined to contain no lesion, so that the user can promptly learn of the negative detection result, which is beneficial to improving the user experience.
Generating the lesion probability map by using the reference feature map includes: statistically obtaining the gradient value of each pixel point in the reference feature map with respect to the lesion, and generating a class activation map as the lesion probability map.
Therefore, generating a class activation map as the lesion probability map from the gradient values of the pixel points in the reference feature map with respect to the lesion can improve the accuracy of the lesion probability map, which is beneficial to improving the accuracy of subsequent image detection.
Fusing the lesion probability map with the first feature maps of the plurality of dimensions to obtain the final fused feature map includes: encoding the reference feature map by using the lesion probability map to obtain a second feature map; and fusing the second feature map with the first feature maps of the plurality of dimensions to obtain the final fused feature map.
Therefore, the reference feature map is encoded by using the lesion probability map to obtain a second feature map, and the second feature map is fused with the first feature maps of the plurality of dimensions to obtain the final fused feature map, so that the lesion probability map participates in the feature map fusion as a global feature, the specificity of the final fused feature map to the lesion is strengthened, and the accuracy of subsequent image detection can be improved.
Encoding the reference feature map by using the lesion probability map to obtain the second feature map includes: multiplying the pixel value of a first pixel point in the lesion probability map by the pixel value of the second pixel point corresponding to the first pixel point in the reference feature map to obtain the pixel value of the corresponding pixel point of the second feature map.
Therefore, obtaining the pixel values of the second feature map by multiplying the pixel values of the first pixel points in the lesion probability map with the pixel values of the corresponding second pixel points in the reference feature map implements the encoding of the reference feature map by the lesion probability map while keeping the amount of calculation low.
Fusing the second feature map with the first feature maps of the plurality of dimensions to obtain the final fused feature map includes: fusing the second feature map with the first feature map of each dimension, in order of dimension from high to low, to obtain the final fused feature map.
Therefore, fusing the second feature map with the first feature map of each dimension in order of dimension from high to low is beneficial to fusing the feature maps dimension by dimension, so that context information can be fully fused, the accuracy and feature richness of the final fused feature map are improved, and the accuracy of subsequent image detection can be improved.
The reference feature map is the first feature map of the highest dimension. Fusing the second feature map with the first feature map of each dimension in order of dimension from high to low to obtain the final fused feature map includes: fusing the reference feature map with a first low-dimensional feature map to obtain a first fused feature map of the same dimension as the first low-dimensional feature map, wherein the first low-dimensional feature map is the first feature map one dimension lower than the reference feature map; fusing the second feature map with the first fused feature map to obtain a second fused feature map of the same dimension as the first fused feature map; repeatedly fusing the second fused feature map with a second low-dimensional feature map to obtain a new second fused feature map of the same dimension as the second low-dimensional feature map, until the first feature maps of all the dimensions have been fused, wherein the second low-dimensional feature map is the first feature map one dimension lower than the current second fused feature map; and taking the second fused feature map obtained from the final fusion as the final fused feature map.
Therefore, by fusing dimension by dimension in this manner, the lesion probability map is coupled, as a global feature, into the decoding process of image detection, so that the final fused feature map strengthens the specificity to the lesion while fully fusing the context information of the feature maps, which improves the accuracy and feature richness of the final fused feature map and is further beneficial to improving the accuracy of subsequent image detection.
The detection result includes a detection region of the lesion in the medical image to be detected. The method further includes: performing organ detection on the medical image to be detected to obtain an organ region in the medical image to be detected; and acquiring the lesion proportion of the lesion detection region within the organ region.
Therefore, by performing organ detection on the medical image to be detected to obtain the organ region and acquiring the proportion of the lesion detection region within the organ region, reference information beneficial to clinical practice can be generated from the detection result, which improves the user experience.
Before performing feature extraction on the medical image to be detected to obtain the first feature maps of the plurality of dimensions, the method further includes preprocessing the medical image to be detected, wherein the preprocessing at least includes: normalizing the pixel values of the medical image to be detected to a preset range by using a preset window value.
Therefore, preprocessing the medical image to be detected before feature extraction, at least by normalizing its pixel values to the preset range using the preset window value, is beneficial to enhancing the contrast of the medical image to be detected and to improving the accuracy of the subsequently extracted first feature maps.
Performing feature extraction on the medical image to be detected to obtain the first feature maps of the plurality of dimensions includes: performing feature extraction on the medical image to be detected by using a feature extraction sub-network of an image detection model to obtain the first feature maps of the plurality of dimensions. Fusing the lesion probability map with the first feature maps of the plurality of dimensions to obtain the final fused feature map, and performing detection processing on the final fused feature map to obtain the detection result about the lesion in the medical image to be detected, include: fusing the lesion probability map with the first feature maps of the plurality of dimensions by using a fusion processing sub-network of the image detection model to obtain the final fused feature map; and performing detection processing on the final fused feature map by using the fusion processing sub-network of the image detection model to obtain the detection result about the lesion in the medical image to be detected.
Therefore, the feature extraction, fusion processing and image detection tasks are performed by the image detection model, which can improve the efficiency of image detection.
Before performing feature extraction on the medical image to be detected by using the feature extraction sub-network of the image detection model, the method further includes: acquiring a sample medical image, wherein the sample medical image is annotated with an actual region of a lesion; performing feature extraction on the sample medical image by using the feature extraction sub-network to obtain first sample feature maps of a plurality of dimensions; taking a first sample feature map of a preset dimension as a reference sample feature map, and generating a lesion sample probability map by using the reference sample feature map, wherein the lesion sample probability map is used for representing the probability that different regions of the sample medical image belong to a lesion; fusing the lesion sample probability map with the first sample feature maps of the plurality of dimensions by using the fusion processing sub-network to obtain a final fused sample feature map; performing detection processing on the final fused sample feature map by using the fusion processing sub-network to obtain a detection region of the lesion in the sample medical image; and adjusting the network parameters of the image detection model by using the difference between the actual region and the detection region.
Therefore, in the training process of the image detection model, the lesion sample probability map is coupled, as a global feature, into the decoding process of image detection, so that the final fused sample feature map strengthens the specificity to the lesion; this enhances the sensitivity of the image detection model to lesions and can increase the training speed of the model.
Adjusting the network parameters of the image detection model by using the difference between the actual region and the detection region includes: processing the actual region and the detection region by using a set similarity loss function to determine a loss value of the image detection model; and adjusting the network parameters of the image detection model with the loss value at a preset learning rate.
Therefore, processing the actual region and the detection region with the set similarity loss function ensures the accuracy of the loss value, so that adjusting the network parameters of the image detection model with the loss value at the preset learning rate can reduce the difference between the detection region and the actual region during training and improve the accuracy of the image detection model.
A second aspect of the present application provides an image detection device, which includes an image acquisition module, a feature extraction module, an image generation module, an image fusion module and a detection processing module. The image acquisition module is configured to acquire a medical image to be detected; the feature extraction module is configured to perform feature extraction on the medical image to be detected to obtain first feature maps of a plurality of dimensions; the image generation module is configured to take a first feature map of a preset dimension as a reference feature map and generate a lesion probability map by using the reference feature map, wherein the lesion probability map is used for representing the probability that different regions of the medical image to be detected belong to a lesion; the image fusion module is configured to fuse the lesion probability map with the first feature maps of the plurality of dimensions to obtain a final fused feature map; and the detection processing module is configured to perform detection processing on the final fused feature map to obtain a detection result about the lesion in the medical image to be detected.
A third aspect of the present application provides an electronic device, which includes a memory and a processor coupled to each other, wherein the processor is configured to execute program instructions stored in the memory to implement the image detection method of the first aspect.
A fourth aspect of the present application provides a computer-readable storage medium having stored thereon program instructions that, when executed by a processor, implement the image detection method of the first aspect.
According to the above scheme, feature extraction is performed on the acquired medical image to be detected to obtain first feature maps of a plurality of dimensions; a first feature map of a preset dimension is taken as a reference feature map to generate a lesion probability map, which represents the probability that different regions of the medical image to be detected belong to a lesion; and the lesion probability map is fused with the first feature maps of the plurality of dimensions to obtain a final fused feature map. The lesion probability map thus participates in the fusion with the first feature maps as a global feature, the final fused feature map strengthens the specificity to the lesion, and the accuracy of image detection can be improved when the detection result about the lesion in the medical image to be detected is obtained by performing detection processing on the final fused feature map.
Drawings
FIG. 1 is a schematic flow chart diagram illustrating an embodiment of an image detection method according to the present application;
FIG. 2 is a block diagram of an embodiment of an image inspection model;
FIG. 3 is a schematic flow chart diagram illustrating another embodiment of an image detection method according to the present application;
FIG. 4 is a schematic flow chart diagram of an embodiment of training an image detection model;
FIG. 5 is a block diagram of an embodiment of an image detection apparatus according to the present application;
FIG. 6 is a block diagram of an embodiment of an electronic device of the present application;
FIG. 7 is a block diagram of an embodiment of a computer-readable storage medium of the present application.
Detailed Description
The following describes in detail the embodiments of the present application with reference to the drawings attached hereto.
In the following description, for purposes of explanation and not limitation, specific details such as particular system structures, interfaces and techniques are set forth in order to provide a thorough understanding of the present application.
The terms "system" and "network" are often used interchangeably herein. The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship. Further, the term "plurality" herein means two or more than two.
Referring to fig. 1, fig. 1 is a schematic flowchart illustrating an embodiment of an image detection method according to the present application. Specifically, the method may include the steps of:
step S11: and acquiring a medical image to be detected.
The medical image to be detected may include a CT image, an MR (Magnetic Resonance) image, and the like, which is not limited herein. In an implementation scenario, the medical image to be detected may be an image obtained by scanning a lung region, a liver region, a heart region, and the like, which is not limited herein and may be set according to the actual application. For example, when the lungs need to be examined to screen for pneumonia, the lung region can be scanned; alternatively, when the liver needs to be examined to screen for liver disease, the liver region can be scanned, and so on; other applications are similar and are not enumerated here.
In one implementation scenario, the medical image to be detected may be a two-dimensional medical image; in another implementation scenario, the medical image to be detected may also be a two-dimensional medical image contained in a three-dimensional medical image. For example, if three-dimensional CT data is obtained by performing a CT scan on the scanned object, the medical image to be detected may be a two-dimensional medical image contained in the three-dimensional CT data.
Step S12: performing feature extraction on the medical image to be detected to obtain first feature maps of a plurality of dimensions.
In an implementation scenario, before feature extraction, the medical image to be detected may be preprocessed, for example by normalizing its pixel values to a preset range using at least a preset window value, so that the contrast of the medical image to be detected can be enhanced and the accuracy of the extracted first feature maps improved. Specifically, the preset window value may be set according to the scanned location; for example, when the medical image to be detected is a CT image obtained by scanning the lungs, the preset window value may be -1400 to 100 Hounsfield Units (HU), and other locations may be set according to the actual situation, which is not enumerated here. In addition, the preset range may be set to 0 to 1, so that, when the medical image to be detected is a lung CT image and the preset window value is -1400 to 100 Hounsfield Units, pixel values below -1400 may be set to -1400, pixel values above 100 may be set to 100, and finally the pixel values in the range of -1400 to 100 may be mapped to the range of 0 to 1. When the preset window value and the preset range take other values, the processing is analogous and is not repeated here.
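As an illustrative sketch of this preprocessing step (Python/NumPy; the lung window of -1400 to 100 HU and the target range of 0 to 1 are the example values given above, and the function name is hypothetical):

    import numpy as np

    def window_normalize(ct, low=-1400.0, high=100.0):
        # Clip HU values to the preset window, then map them linearly to [0, 1].
        ct = np.clip(ct, low, high)
        return (ct - low) / (high - low)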
In an implementation scenario, in order to make feature extraction more convenient, an image detection model may be trained in advance, and the image detection model may include a feature extraction sub-network, so that the feature extraction sub-network of the image detection model may be used to perform feature extraction on the medical image to be detected to obtain first feature maps of a plurality of dimensions. The number of dimensions may be one or more, for example two, three, and so on, which is not limited herein; the network depth of the feature extraction sub-network may be set according to the actual situation so as to obtain first feature maps of different dimensions. The higher the dimension, the larger the number of channels of the corresponding first feature map and the smaller its resolution.
Referring to fig. 2, fig. 2 is a schematic diagram of a framework of an embodiment of an image detection model. As shown in fig. 2, the feature extraction sub-network may include a plurality of sequentially connected feature extraction sub-modules, and each feature extraction sub-module may perform processing tasks such as convolution, regularization, activation and pooling. Specifically, a feature extraction sub-module may include any one of a Residual Block, an Inception Block and a Dense Block, which is not limited herein, and the pooling may be any one of Max Pooling, Average Pooling and a convolutional layer with a stride of 2, which is likewise not limited herein. As the sequentially connected feature extraction sub-modules perform feature extraction, first feature maps from low to high dimensions are obtained in turn; each time a feature extraction sub-module is executed, the number of channels is doubled and the resolution is halved. For example, if the resolution of the medical image to be detected is 256 × 256, the first feature map extracted by the first feature extraction sub-module in fig. 2 has 64 channels and a resolution of 128 × 128; for convenience of description, the size of a first feature map is uniformly expressed as the number of channels × resolution, i.e. 64 × 128. The first feature map extracted by the second, sequentially connected feature extraction sub-module then has a size of 128 × 64, that extracted by the third a size of 256 × 32, and that extracted by the fourth a size of 512 × 16. When the feature extraction sub-network has another structure, the analogy applies and is not repeated here.
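A minimal sketch of such a feature extraction sub-network under the sizes quoted above (Python/PyTorch; plain convolution blocks and max pooling stand in for the residual/inception/dense block and pooling choices, which is an assumption made for brevity):

    import torch.nn as nn

    class Encoder(nn.Module):
        def __init__(self):
            super().__init__()
            # Channel counts double stage by stage: 64, 128, 256, 512.
            chans = [1, 64, 128, 256, 512]
            self.stages = nn.ModuleList(
                nn.Sequential(
                    nn.Conv2d(c_in, c_out, 3, padding=1),
                    nn.BatchNorm2d(c_out),
                    nn.ReLU(inplace=True),
                    nn.MaxPool2d(2),  # halves the resolution
                )
                for c_in, c_out in zip(chans[:-1], chans[1:])
            )

        def forward(self, x):          # x: (N, 1, 256, 256)
            feats = []
            for stage in self.stages:
                x = stage(x)
                feats.append(x)        # 64x128, 128x64, 256x32, 512x16
            return feats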
Step S13: taking the first feature map of the preset dimension as a reference feature map, and generating a lesion probability map by using the reference feature map.
The lesion probability map is used for representing the probability that different regions in the medical image to be detected belong to a lesion. Still taking a medical image to be detected with a resolution of 256 × 256 as an example, the reference feature map may be the first feature map of the highest dimension, i.e. the first feature map of size 512 × 16, and each pixel point in the reference feature map then corresponds to a 16 × 16 region in the medical image to be detected, so that the pixel value of each pixel point in the lesion probability map generated from the reference feature map can represent the probability that a 16 × 16 region in the medical image to be detected belongs to a lesion. In addition, the reference feature map may also be a first feature map of another dimension; for example, it may be the first feature map one dimension lower than the highest-dimensional one, which is not limited herein. In other implementation scenarios, the first feature map of another dimension may be selected as the reference feature map according to the specific application, which is likewise not limited herein.
In an implementation scenario, in order to improve the accuracy of the lesion probability map, a Class Activation Map (CAM) may be generated by statistically obtaining the gradient values of the pixel points in the reference feature map with respect to the lesion, and used as the lesion probability map. In a specific implementation scenario, the reference feature map may be used to perform prediction processing to obtain a first probability value y^c that the medical image to be detected contains a lesion, and the gradient value ∂y^c/∂A^k_{ij} of the first probability value with respect to each pixel point A^k_{ij} of the reference feature map may then be calculated, where k indicates the k-th channel of the reference feature map and ij indicates the pixel point in the i-th row and j-th column of that channel. Specifically, an image detection model may be trained in advance, and the image detection model includes a prediction processing sub-network, so that the prediction processing sub-network may be used to perform prediction processing on the reference feature map to obtain the first probability value that the medical image to be detected contains a lesion. Referring to fig. 2, the prediction processing sub-network may include a Global Average Pooling (GAP) layer and a fully-connected layer. Taking a medical image to be detected with a resolution of 256 × 256 as an example, global average pooling is performed on the reference feature map of size 512 × 16 to obtain a vector of size 512 × 1, and the fully-connected layer processes this vector to obtain the first probability value that the medical image to be detected contains a lesion.
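As a concrete illustration of this prediction processing sub-network, the sketch below (Python/PyTorch; the class name and the single-logit binary output are assumptions made for illustration) applies global average pooling to the 512 × 16 reference feature map and a fully-connected layer to the pooled vector:

    import torch
    import torch.nn as nn

    class PredictionHead(nn.Module):
        def __init__(self, channels=512):
            super().__init__()
            self.fc = nn.Linear(channels, 1)

        def forward(self, ref):                # ref: (N, 512, 16, 16)
            v = ref.mean(dim=(2, 3))           # global average pooling -> (N, 512)
            return torch.sigmoid(self.fc(v))   # first probability value y^c in [0, 1]

If ref has requires_grad set, the gradient values ∂y^c/∂A^k_{ij} needed for the class activation map can then be obtained with torch.autograd.grad on this output.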
In an implementation scenario, in order to screen out negative data in advance of detection and avoid false-positive results (i.e. no lesion exists in the medical image to be detected, yet a lesion is detected, which would interfere with clinical application), the reference feature map may be used to perform prediction processing to obtain a first probability value that the medical image to be detected contains a lesion, and whether to perform the step of generating the lesion probability map by using the reference feature map and the subsequent steps is determined based on the first probability value. In a specific implementation scenario, the manner of obtaining the first probability value by performing prediction processing on the reference feature map is as described in the foregoing implementation scenario and is not repeated here. In another specific implementation scenario, a probability threshold may be preset; when the first probability value is below the probability threshold, it may be determined that the medical image to be detected contains no lesion, and the step of generating the lesion probability map by using the reference feature map and the subsequent steps are not performed, which greatly reduces the possibility of obtaining a false-positive result by continuing the image detection in this case. The probability threshold may be set according to the actual application; for example, it may be set to a value such as 20% or 30%, so that medical images that very probably contain no lesion are not subjected to subsequent detection, improving detection efficiency and reducing the possibility of false-positive results.
Step S14: fusing the lesion probability map with the first feature maps of the plurality of dimensions to obtain a final fused feature map.
Specifically, in the fusion process, the lesion probability map and the first feature maps of the plurality of dimensions may be fused in order of dimension from high to low to obtain the final fused feature map. In an implementation scenario, the reference feature map may be encoded by using the lesion probability map to obtain a second feature map, and the second feature map may then be fused with the first feature maps of the plurality of dimensions to obtain the final fused feature map, so that the lesion probability map participates in the feature map fusion as a global feature and the final fused feature map strengthens the specificity to the lesion, which is beneficial to improving the accuracy of subsequent image detection. Taking a medical image to be detected with a resolution of 256 × 256 as an example, the reference feature map has a size of 512 × 16, and a lesion probability map is obtained for each of its 512 channels, so that the lesion probability map also has a size of 512 × 16; the lesion probability map of size 512 × 16, the reference feature map of size 512 × 16, the first feature map of size 256 × 32, the first feature map of size 128 × 64 and the first feature map of size 64 × 128 may then be fused to obtain the final fused feature map.
In a specific implementation scenario, in order to reduce the amount of calculation of the encoding processing, the pixel value of a first pixel point in the lesion probability map may be directly multiplied by the pixel value of the second pixel point corresponding to the first pixel point in the reference feature map to obtain the pixel value of the corresponding pixel point of the second feature map. Still taking a medical image to be detected with a resolution of 256 × 256 as an example, the above processing yields a reference feature map of size 512 × 16 and a corresponding lesion probability map of size 512 × 16; for each channel, the pixel values of the first pixel points in the lesion probability map are multiplied by the pixel values of the corresponding second pixel points in the reference feature map of that channel to obtain the pixel values of the corresponding pixel points of the second feature map of that channel, and performing the same processing on all 512 channels yields a second feature map of size 512 × 16. When the medical image to be detected has another resolution, or the reference feature map has another size, the analogy applies and is not enumerated here.
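In code, this encoding is a single element-wise product; a minimal sketch (Python/PyTorch; the random tensors stand in for real data and the variable names are hypothetical):

    import torch

    cam = torch.rand(1, 512, 16, 16)  # lesion probability map (placeholder values)
    ref = torch.rand(1, 512, 16, 16)  # reference feature map (placeholder values)
    second = cam * ref                # second feature map, same size: 512 x 16 x 16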
In another specific implementation scenario, in order to improve the accuracy and richness of the final fused feature map, the second feature map may be fused with the first feature map of each dimension in order of dimension from high to low, which is beneficial to fully fusing context information. Specifically, the reference feature map may be fused with the first low-dimensional feature map to obtain a first fused feature map of the same dimension as the first low-dimensional feature map, where the first low-dimensional feature map is the first feature map one dimension lower than the reference feature map; the second feature map is then fused with the first fused feature map to obtain a second fused feature map of the same dimension as the first fused feature map; the second fused feature map is then repeatedly fused with the second low-dimensional feature map to obtain a new second fused feature map of the same dimension as the second low-dimensional feature map, until the first feature maps of all dimensions have been fused, where the second low-dimensional feature map is the first feature map one dimension lower than the current second fused feature map; the second fused feature map obtained from the final fusion may be taken as the final fused feature map. Still taking a medical image to be detected with a resolution of 256 × 256 as an example, the reference feature map of size 512 × 16 may be fused with the corresponding first low-dimensional feature map, i.e. the first feature map of size 256 × 32: in the fusion process, the number of channels of the 512 × 16 reference feature map is halved and its resolution doubled, adjusting its size to that of the first low-dimensional feature map; the adjusted reference feature map and the 256 × 32 first low-dimensional feature map are then merged into a feature map of size 512 × 32, and, for convenience of subsequent fusion, the number of channels is halved again, giving a first fused feature map of the same size as the first low-dimensional feature map, i.e. 256 × 32. The second feature map of size 512 × 16 is then fused with the first fused feature map in the same way: its channels are halved and its resolution doubled so that its size matches the first fused feature map, the adjusted second feature map and the 256 × 32 first fused feature map are merged into a feature map of size 512 × 32, and the channels are halved again, giving a second fused feature map of the same size as the first fused feature map, i.e. 256 × 32.
Similarly, in the next fusion, the number of channels of the 256 × 32 second fused feature map is halved and its resolution doubled, adjusting it to the size of the corresponding second low-dimensional feature map (i.e. the first feature map of size 128 × 64); the adjusted second fused feature map and that second low-dimensional feature map are merged into a feature map of size 256 × 64, whose channels are halved in the subsequent processing, giving a new second fused feature map of the same size as the corresponding second low-dimensional feature map, i.e. 128 × 64. The fusion step is then performed once more with the next second low-dimensional feature map (i.e. the first feature map of size 64 × 128): the channels of the 128 × 64 second fused feature map are halved and its resolution doubled so that its size matches that second low-dimensional feature map, the adjusted map and the 64 × 128 first feature map are merged into a feature map of size 128 × 128, and the channels are halved in the subsequent processing, giving a new second fused feature map of the same size as the corresponding second low-dimensional feature map, i.e. 64 × 128. Since the first feature maps of all dimensions have now been fused, this finally fused second fused feature map of size 64 × 128 may be taken as the final fused feature map. Other cases can be handled by analogy and are not enumerated here.
In another specific implementation scenario, in order to improve the efficiency of the fusion processing, an image detection model may be trained in advance, and the image detection model includes a fusion processing sub-network, so that the fusion processing sub-network of the image detection model is used to fuse the lesion probability map with the first feature maps of the plurality of dimensions to obtain the final fused feature map. Specifically, referring to fig. 2, the fusion processing sub-network includes a plurality of sequentially connected fusion processing sub-modules, which perform the step of fusing the lesion probability map with the first feature maps of the plurality of dimensions to obtain the final fused feature map. Each fusion processing sub-module may perform processing operations such as upsampling, convolution, regularization, activation, merging, convolution, regularization and activation. Taking the fusion of the higher-dimensional feature map of size 512 × 16 with the corresponding low-dimensional feature map of size 256 × 32 from the foregoing implementation scenario as an example: the upsampling doubles the resolution of the 512 × 16 higher-dimensional feature map, giving a feature map of size 512 × 32; the first convolution halves the number of channels of this feature map, giving a feature map of the same size as the corresponding low-dimensional feature map (i.e. 256 × 32); the merging combines the adjusted higher-dimensional feature map with the corresponding low-dimensional feature map, doubling the number of channels, i.e. giving a merged feature map of size 512 × 32; and the second convolution halves the channels of the merged feature map again, so that the fused feature map obtained by the current fusion has the same size as the corresponding low-dimensional feature map, i.e. 256 × 32. Reference may be made to the relevant steps in the foregoing implementation scenarios, which are not repeated here.
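A sketch of one such fusion processing sub-module under the sizes used above (Python/PyTorch; the specific upsampling mode, normalization and activation layers are assumptions, as the application only names the operation types):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class FuseBlock(nn.Module):
        def __init__(self, c_high):  # c_high: channels of the higher-dimensional map
            super().__init__()
            self.reduce1 = nn.Sequential(
                nn.Conv2d(c_high, c_high // 2, 3, padding=1),
                nn.BatchNorm2d(c_high // 2), nn.ReLU(inplace=True))
            self.reduce2 = nn.Sequential(
                nn.Conv2d(c_high, c_high // 2, 3, padding=1),
                nn.BatchNorm2d(c_high // 2), nn.ReLU(inplace=True))

        def forward(self, high, low):
            # Upsample: double the resolution of the higher-dimensional map.
            high = F.interpolate(high, scale_factor=2, mode='nearest')
            high = self.reduce1(high)               # halve channels: same size as `low`
            merged = torch.cat([high, low], dim=1)  # merge: channels double again
            return self.reduce2(merged)             # halve channels: same size as `low`

For example, FuseBlock(512) applied to the 512 × 16 reference feature map and the 256 × 32 first feature map would produce the 256 × 32 first fused feature map of the walk-through above.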
Step S15: performing detection processing on the final fused feature map to obtain a detection result about the lesion in the medical image to be detected.
In one implementation scenario, the detection result may include a detection region of the lesion in the medical image to be detected, for convenient viewing by the doctor. Specifically, the detection region may be represented by a preset color and a preset line type, which are not limited herein. In another implementation scenario, the medical image to be detected is a two-dimensional medical image contained in a three-dimensional medical image, so that the detection region of the lesion in the three-dimensional medical image can be obtained from the detection regions detected in the two-dimensional medical images; for example, the detection regions detected in the two-dimensional medical images can be fused in three-dimensional space by stacking to obtain the detection region of the lesion in the three-dimensional medical image, which may be set according to the practical application and is not limited herein.
In another implementation scenario, in order to improve the efficiency of the detection processing, an image detection model may be trained in advance, and the image detection model may include a fusion processing sub-network, so that the fusion processing sub-network of the image detection model may be used to perform detection processing on the final fused feature map to obtain the detection result about the lesion in the medical image to be detected. Specifically, referring to fig. 2, in addition to the plurality of sequentially connected fusion processing sub-modules, the fusion processing sub-network may further include an activation processing sub-module connected to the last of these sub-modules, where the activation processing sub-module is configured to convolve and activate the final fused feature map to obtain a feature map with 1 channel and to normalize it, giving the detection result about the lesion. Specifically, the activation processing sub-module may include a convolutional layer and an activation layer that are sequentially connected, and the activation layer may use a sigmoid activation function, which may be set according to the actual application and is not limited herein.
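The activation processing sub-module thus amounts to a single-channel convolution followed by a sigmoid; a minimal sketch (Python/PyTorch; the 1 × 1 kernel size is an assumption):

    import torch.nn as nn

    head = nn.Sequential(
        nn.Conv2d(64, 1, kernel_size=1),  # 64-channel final fused feature map -> 1 channel
        nn.Sigmoid(),                     # normalize to (0, 1) lesion probabilities
    )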
In another implementation scenario, in order to provide a clinical reference, organ detection may be performed on the medical image to be detected to obtain an organ region in the medical image to be detected, so that the lesion proportion of the lesion detection region within the organ region may be acquired, thereby providing the doctor with reference information beneficial to clinical practice and improving the user experience. For example, lung detection may be performed on the medical image to be detected to obtain a lung lobe region, so that the lesion proportion of the lesion detection region within the lung lobe region may be acquired. Other application scenarios can be handled by analogy and are not limited herein. Specifically, in order to improve the efficiency of organ detection, an organ detection model may be trained in advance, so that the organ detection model may be used to perform organ detection on the medical image to be detected to obtain the organ region. In particular, the organ detection model may be based on U-Net, FCN (Fully Convolutional Networks), and the like, which is not limited herein.
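Given binary masks for the detected lesion region and the detected organ region, the lesion proportion reduces to a ratio of pixel counts; a sketch (Python/NumPy; the mask names are hypothetical):

    import numpy as np

    def lesion_ratio(lesion_mask, organ_mask):
        # Fraction of the organ region occupied by the detected lesion.
        organ = organ_mask.astype(bool)
        lesion = lesion_mask.astype(bool) & organ
        return lesion.sum() / max(organ.sum(), 1)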
According to the above scheme, feature extraction is performed on the acquired medical image to be detected to obtain first feature maps of a plurality of dimensions; a first feature map of a preset dimension is taken as a reference feature map to generate a lesion probability map, which represents the probability that different regions of the medical image to be detected belong to a lesion; and the lesion probability map is fused with the first feature maps of the plurality of dimensions to obtain a final fused feature map. The lesion probability map thus participates in the fusion with the first feature maps as a global feature, the final fused feature map strengthens the specificity to the lesion, and the accuracy of image detection can be improved when the detection result about the lesion in the medical image to be detected is obtained by performing detection processing on the final fused feature map.
Referring to fig. 3, fig. 3 is a schematic flowchart illustrating an image detection method according to another embodiment of the present application. Specifically, the method may include the steps of:
step S31: and acquiring a medical image to be detected.
Reference may be made in particular to the relevant steps in the preceding embodiments.
Step S32: performing feature extraction on the medical image to be detected to obtain first feature maps of a plurality of dimensions.
Reference may be made in particular to the relevant steps in the preceding embodiments.
Step S33: taking the first feature map of a preset dimension as a reference feature map, and performing prediction processing using the reference feature map to obtain a first probability value that the medical image to be detected contains a lesion.
Specifically, an image detection model may be trained in advance, and the image detection model includes a prediction processing sub-network, so that the prediction processing sub-network may be used to perform prediction processing on the reference feature map to obtain the first probability value that the medical image to be detected contains a lesion. For details, reference may be made to the relevant steps in the foregoing embodiments, which are not repeated here.
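The patent does not fix the architecture of the prediction processing sub-network; one plausible sketch (global pooling followed by a linear layer and a sigmoid, all names hypothetical) is:

```python
import torch
import torch.nn as nn

class PredictionHead(nn.Module):
    """Map the reference feature map to a single first probability value
    that the image contains a lesion."""
    def __init__(self, in_channels: int):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)   # global average pooling
        self.fc = nn.Linear(in_channels, 1)

    def forward(self, ref):                          # ref: (N, C, H, W)
        x = self.pool(ref).flatten(1)                # (N, C)
        return torch.sigmoid(self.fc(x)).squeeze(1)  # (N,) probabilities

probs = PredictionHead(in_channels=256)(torch.randn(3, 256, 16, 16))
```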
Step S34: determining, based on the first probability value, whether to perform the step of generating a lesion probability map using the reference feature map and the subsequent steps; if so, performing step S35, otherwise performing step S38.
In one implementation scenario, when the first probability value satisfies a first preset condition, it may be determined to perform the step of generating the lesion probability map using the reference feature map and subsequent steps. Specifically, the first preset condition may include that the first probability value is greater than or equal to a first probability threshold, and the first probability threshold may be set according to practical applications, for example, may be set to 15%, 20%, 25%, and the like, and is not limited herein.
In another implementation scenario, the medical image to be detected is a two-dimensional medical image included in a three-dimensional medical image, so the first probability values of the two-dimensional medical images may be sorted in descending order and a preset number of first probability values selected, where the preset number may be set according to the actual situation, for example 5 or 6, and is not limited herein. Preset processing is then performed on the preset number of first probability values, for example an averaging operation, to obtain a second probability value; when the second probability value satisfies a second preset condition, it may be determined to perform the step of generating the lesion probability map using the reference feature map and the subsequent steps. Specifically, the second preset condition may include that the second probability value is greater than or equal to a second probability threshold, and the second probability threshold may be set according to the practical application, for example 15%, 20%, 25%, and the like, and is not limited herein.
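The screening logic of these two implementation scenarios can be summarized in a short sketch (the top-k count and threshold are the illustrative figures above, not values mandated by the patent):

```python
def should_generate_probability_map(slice_probs, top_k=5, second_threshold=0.20):
    """Per-scan screening: take the top_k largest first probability
    values of the two-dimensional slices, average them into a second
    probability value, and compare against the second threshold."""
    top = sorted(slice_probs, reverse=True)[:top_k]
    second_prob = sum(top) / len(top)
    return second_prob >= second_threshold

# e.g. a scan whose slices scored [0.05, 0.12, 0.31, 0.44, 0.09, 0.51]
run_detection = should_generate_probability_map([0.05, 0.12, 0.31, 0.44, 0.09, 0.51])
```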
Step S35: generating a lesion probability map using the reference feature map.
Reference may be made in particular to the relevant steps in the preceding embodiments.
Step S36: fusing the lesion probability map with the first feature maps of the plurality of dimensions to obtain a final fused feature map.
Reference may be made in particular to the relevant steps in the preceding embodiments.
Step S37: performing detection processing on the final fused feature map to obtain a detection result about the lesion in the medical image to be detected.
Reference may be made in particular to the relevant steps in the preceding embodiments.
Step S38: prompting that the medical image to be detected does not contain a lesion.
When it is determined based on the first probability value that the step of generating the lesion probability map using the reference feature map and the subsequent steps are not to be performed, it may be determined that the medical image to be detected does not contain a lesion. In addition, medical staff may be prompted, by text, image, voice, or the like, that the medical image to be detected does not contain a lesion, which is not limited herein.
Different from the foregoing embodiment, prediction processing is performed using the reference feature map to obtain the first probability value that the medical image to be detected contains a lesion, and whether to perform the step of generating the lesion probability map using the reference feature map and the subsequent steps is determined based on the first probability value. This makes it possible to avoid false positive detection results when the medical image to be detected contains no lesion, which is beneficial to further improving the accuracy of image detection; and since negative data can be screened out in advance before detection, the efficiency of image detection can also be improved.
Referring to fig. 4, fig. 4 is a flowchart illustrating an embodiment of training an image detection model. Specifically, the image detection model may be trained before the feature extraction of the medical image to be detected is performed by using the feature extraction sub-network of the image detection model, and the network structure of the image detection model may refer to the foregoing embodiments, which are not described herein again. Specifically, the training step may include:
step S41: a sample medical image is acquired, wherein the sample medical image includes an actual region of the lesion.
The sample medical image may include a CT image, an MR (Magnetic Resonance) image, and the like, which is not limited herein. Specifically, the sample medical image may be an image obtained by scanning a lung region, a liver region, a heart region, and so on, which may be set according to the practical application and is not limited herein. In one implementation scenario, the sample medical image may be a two-dimensional medical image included in a three-dimensional medical image; for example, three-dimensional CT data may be obtained by CT scanning a scanned object, and the sample medical image may be a two-dimensional medical image included in the three-dimensional CT data.
In one implementation scenario, in order to improve sample diversity, data augmentation may also be performed on the sample medical image. In another implementation scenario, in order to improve the contrast of the sample medical image, the pixel values of the sample medical image may also be normalized to within a preset range using a preset window value. For the specific settings of the preset window value and the preset range, reference may be made to the relevant steps in the foregoing embodiments, and details are not repeated here.
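As an illustration of the window normalization, the sketch below clips intensities to a preset window and rescales them to [0, 1]; the lung-window center/width defaults are assumptions for the example, since the patent leaves the preset window value to the application:

```python
import numpy as np

def window_normalize(image, window_center=-600.0, window_width=1500.0):
    """Clip CT intensities to the preset window and rescale to [0, 1]."""
    low = window_center - window_width / 2.0
    high = window_center + window_width / 2.0
    clipped = np.clip(image.astype(np.float32), low, high)
    return (clipped - low) / (high - low)

normalized = window_normalize(np.random.randint(-1024, 1024, (512, 512)))
```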
Step S42: performing feature extraction on the sample medical image using the feature extraction sub-network to obtain first sample feature maps of a plurality of dimensions.
Reference may be made in particular to the relevant steps in the preceding embodiments.
Step S43: taking the first sample feature map of a preset dimension as a reference sample feature map, and generating a lesion sample probability map using the reference sample feature map.
The lesion sample probability map is used to represent the probability that different regions in the sample medical image belong to a lesion. The manner of obtaining the lesion sample probability map may specifically refer to the steps of obtaining the lesion probability map in the foregoing embodiments, and will not be described herein again.
Step S44: fusing the lesion sample probability map with the first sample feature maps of the plurality of dimensions using the fusion processing sub-network to obtain a final fused sample feature map.
Specifically, reference may be made to the relevant steps in the foregoing embodiments, which are not described herein again.
Step S45: performing detection processing on the final fused sample feature map using the fusion processing sub-network to obtain a detection region about the lesion in the sample medical image.
Specifically, reference may be made to the relevant steps in the foregoing embodiments, which are not described herein again.
Step S46: adjusting the network parameters of the image detection model using the difference between the actual region and the detection region.
In one implementation scenario, the actual region and the detection region may be processed by a set similarity loss function (Dice loss) to determine a loss value of the image detection model, so as to adjust the network parameters of the image detection model at a preset learning rate (e.g., 3e-4) using the loss value. In another implementation scenario, the actual region and the detection region may be processed by a cross entropy loss function (CE loss) to determine the loss value, again adjusting the network parameters at a preset learning rate (e.g., 3e-4); this is not limited herein. In yet another implementation scenario, referring to fig. 2, the image detection model further includes a prediction processing sub-network configured to perform prediction processing on the reference sample feature map to obtain a prediction probability that the sample contains a lesion. During training, the prediction probability may further be processed by a binary cross entropy loss function to determine a classification loss value of the image detection model, and the loss value determined from the actual region and the detection region may be weighted together with the classification loss value to obtain a weighted loss value of the image detection model, which is then used to adjust the network parameters of the image detection model.
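A sketch of the weighted loss described above, combining a Dice segmentation loss with a binary cross-entropy classification loss (the function names and the equal weights are assumptions; the patent does not specify the weighting):

```python
import torch
import torch.nn.functional as F

def dice_loss(pred, target, eps=1e-6):
    """Set similarity (Dice) loss between the predicted detection region
    and the actual lesion region; both tensors are (N, 1, H, W) in [0, 1]."""
    inter = (pred * target).sum(dim=(1, 2, 3))
    union = pred.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    return (1.0 - (2.0 * inter + eps) / (union + eps)).mean()

def weighted_loss(pred_mask, gt_mask, pred_prob, gt_label,
                  seg_weight=1.0, cls_weight=1.0):
    """Weight the segmentation loss together with the binary
    cross-entropy classification loss on the predicted probability."""
    seg = dice_loss(pred_mask, gt_mask)
    cls = F.binary_cross_entropy(pred_prob, gt_label)
    return seg_weight * seg + cls_weight * cls
```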
In an implementation scenario, a training end condition may be preset, and when the preset training end condition is satisfied, the training of the image detection model may be ended. Specifically, the training end condition may include either of the following: the loss value is smaller than a preset loss threshold; the number of training iterations reaches a preset count threshold. This is not limited herein. The preset loss threshold and the preset count threshold may be set according to actual conditions; for example, the preset count threshold may be set to 1000 iterations, 2000 iterations, and the like, which is not limited herein.
In one implementation scenario, the network parameters of the image detection model may be adjusted using the loss value by Stochastic Gradient Descent (SGD), Batch Gradient Descent (BGD), Mini-Batch Gradient Descent (MBGD), or other optimization manners. Batch gradient descent updates the parameters using all samples at each iteration; stochastic gradient descent updates the parameters using one sample at each iteration; mini-batch gradient descent updates the parameters using one batch of samples at each iteration, and details are not repeated here.
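For concreteness, a mini-batch gradient descent step at the preset learning rate 3e-4 might look as follows (the stand-in model and random data are purely illustrative):

```python
import torch
import torch.nn as nn

model = nn.Conv2d(1, 1, 3, padding=1)      # stand-in for the detection model
optimizer = torch.optim.SGD(model.parameters(), lr=3e-4)

for _ in range(10):                         # one parameter update per batch
    images = torch.randn(4, 1, 64, 64)      # a mini-batch of 4 samples
    targets = torch.rand(4, 1, 64, 64).round()
    loss = nn.functional.binary_cross_entropy_with_logits(model(images), targets)
    optimizer.zero_grad()
    loss.backward()                         # gradients from this batch only
    optimizer.step()
```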
Different from the foregoing embodiment, a sample medical image containing an actual region of a lesion is acquired, feature extraction is performed on the sample medical image using the feature extraction sub-network to obtain first sample feature maps of a plurality of dimensions, and the first sample feature map of a preset dimension is taken as a reference sample feature map to generate a lesion sample probability map, which represents the probability that different regions in the sample medical image belong to a lesion. The lesion sample probability map is fused with the first sample feature maps of the plurality of dimensions to obtain a final fused sample feature map, detection processing is performed on the final fused sample feature map to obtain a detection region about the lesion in the sample medical image, and the network parameters of the image detection model are adjusted using the difference between the actual region and the detection region. Therefore, during training of the image detection model, the lesion sample probability map is coupled, as a global feature, into the decoding process of image detection, so that the final fused sample feature map can strengthen the specificity to the lesion, the sensitivity of the image detection model to lesions can be enhanced, and the training speed of the model can be improved.
Referring to fig. 5, fig. 5 is a schematic diagram of a framework of an embodiment of an image detection apparatus 50 according to the present application. The image detection apparatus 50 comprises an image acquisition module 51, a feature extraction module 52, an image generation module 53, an image fusion module 54 and a detection processing module 55. The image acquisition module 51 is used for acquiring a medical image to be detected; the feature extraction module 52 is configured to perform feature extraction on the medical image to be detected to obtain first feature maps of a plurality of dimensions; the image generation module 53 is configured to take the first feature map of a preset dimension as a reference feature map and generate a lesion probability map using the reference feature map, where the lesion probability map represents the probabilities that different regions of the medical image to be detected belong to a lesion; the image fusion module 54 is configured to fuse the lesion probability map with the first feature maps of the plurality of dimensions to obtain a final fused feature map; the detection processing module 55 is configured to perform detection processing on the final fused feature map to obtain a detection result about a lesion in the medical image to be detected.
According to the above scheme, feature extraction is performed on the acquired medical image to be detected to obtain first feature maps of a plurality of dimensions, and the first feature map of a preset dimension is used as a reference feature map to generate a lesion probability map, where the lesion probability map represents the probability that different regions of the medical image to be detected belong to a lesion. The lesion probability map is then fused with the first feature maps of the plurality of dimensions to obtain a final fused feature map, so that the lesion probability map participates in the fusion as a global feature and the final fused feature map strengthens the specificity to the lesion. As a result, the accuracy of image detection can be improved when the detection result about the lesion in the medical image to be detected is obtained by performing detection processing on the final fused feature map.
In some embodiments, the image detection apparatus 50 further includes a prediction processing module, configured to perform prediction processing using the reference feature map to obtain a first probability value that the medical image to be detected contains a lesion, and an execution determination module, configured to determine, based on the first probability value, whether to perform the step of generating the lesion probability map using the reference feature map and the subsequent steps.
Different from the foregoing embodiment, prediction processing is performed using the reference feature map to obtain the first probability value that the medical image to be detected contains a lesion, and whether to perform the step of generating the lesion probability map using the reference feature map and the subsequent steps is determined based on the first probability value, so that false positive detection results can be avoided when the medical image to be detected contains no lesion, which is beneficial to further improving the accuracy of image detection; and since negative data can be screened out in advance before detection, the efficiency of image detection can be improved.
In some embodiments, the execution determination module is specifically configured to execute the step of generating the lesion probability map using the reference feature map and the subsequent steps when the first probability value satisfies a first preset condition.
In some embodiments, the execution determination module further includes a probability selection sub-module configured to sort the first probability values of the two-dimensional medical images in descending order and select a preset number of first probability values, a probability processing sub-module configured to perform preset processing on the preset number of first probability values to obtain a second probability value, and a determination execution sub-module configured to perform the step of generating the lesion probability map using the reference feature map and the subsequent steps when the second probability value satisfies a second preset condition.
Different from the foregoing embodiment, when the first probability value satisfies the first preset condition, the step of generating the lesion probability map using the reference feature map and the subsequent steps are performed; or, when the medical image to be detected is a two-dimensional medical image included in a three-dimensional medical image, the first probability values of the two-dimensional medical images are sorted in descending order, a preset number of first probability values are selected, and preset processing is performed on the preset number of first probability values to obtain a second probability value, so that the step of generating the lesion probability map using the reference feature map and the subsequent steps are performed when the second probability value satisfies a second preset condition. This is beneficial to screening out negative data in advance before detection, thereby improving both the accuracy and the efficiency of image detection.
In some embodiments, the first preset condition includes: the first probability value is greater than or equal to a first probability threshold; the second preset condition includes: the second probability value is greater than or equal to a second probability threshold; the predetermined process is an averaging operation.
Different from the foregoing embodiment, the first preset condition is set to be that the first probability value is greater than or equal to the first probability threshold, the second preset condition is set to be that the second probability value is greater than or equal to the second probability threshold, and the preset processing is set to be an averaging operation. This reduces the amount of calculation of the second probability value while allowing the second probability value to accurately reflect the possibility that the three-dimensional medical image contains a lesion, so that the step of generating the lesion probability map using the reference feature map and the subsequent steps are performed when the first probability value is greater than or equal to the first probability threshold, or when the second probability value is greater than or equal to the second probability threshold. In this way, negative data can be screened out in advance before detection, thereby improving the accuracy and efficiency of image detection.
In some embodiments, the execution determination module is further specifically configured to determine that the medical image to be tested does not contain a lesion when the first probability value does not satisfy the first preset condition or the second probability value does not satisfy the second preset condition.
Different from the foregoing embodiment, when the first probability value does not satisfy the first preset condition or the second probability value does not satisfy the second preset condition, it is determined that the medical image to be detected does not contain a lesion, so that the user can promptly learn of the negative detection result of the medical image to be detected, which is beneficial to improving the user experience.
In some embodiments, the image generation module 53 is specifically configured to count the gradient values of each pixel point in the reference feature map with respect to the lesion, and to generate a class activation map as the lesion probability map.
Different from the foregoing embodiment, the class activation map is generated by counting the gradient value of each pixel point in the reference feature map with respect to the lesion to serve as the lesion probability map, which can improve the accuracy of the lesion probability map, and thus can be beneficial to improving the accuracy of subsequent image detection.
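The patent does not give the exact counting formula; a Grad-CAM-style sketch consistent with this description (channel weights from mean gradients, then ReLU and normalization, with all names hypothetical) is:

```python
import torch
import torch.nn.functional as F

def lesion_class_activation_map(ref_feat, lesion_score):
    """Weight each channel of the reference feature map by the mean
    gradient of the lesion score with respect to it, sum over channels,
    ReLU, and normalize to [0, 1] to form a lesion probability map.

    ref_feat: (N, C, H, W) with requires_grad set, and lesion_score a
    scalar tensor computed from ref_feat (an assumption of this sketch)."""
    grads = torch.autograd.grad(lesion_score, ref_feat, retain_graph=True)[0]
    weights = grads.mean(dim=(2, 3), keepdim=True)             # (N, C, 1, 1)
    cam = F.relu((weights * ref_feat).sum(dim=1, keepdim=True))
    return cam / (cam.amax(dim=(2, 3), keepdim=True) + 1e-6)   # (N, 1, H, W)
```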
In some embodiments, the image fusion module 54 includes an encoding processing sub-module configured to encode the reference feature map by using the lesion probability map to obtain a second feature map, and the image fusion module 54 includes a fusion processing sub-module configured to fuse the second feature map with the first feature maps of the plurality of dimensions to obtain a final fusion feature map.
Different from the embodiment, the reference feature map is encoded by using the lesion probability map to obtain a second feature map, and the second feature map is fused with the first feature maps with a plurality of dimensions to obtain a final fusion feature map, so that the lesion probability map can be used as a global feature to participate in feature map fusion, the specificity of the final fusion feature map on a lesion can be strengthened, and the accuracy of subsequent image detection can be improved.
In some embodiments, the encoding processing sub-module is specifically configured to multiply a pixel value of a first pixel point in the lesion probability map by a pixel value of a second pixel point corresponding to the first pixel point in the reference feature map, so as to obtain a pixel value of a corresponding pixel point of the second feature map.
Different from the embodiment, the pixel value of the corresponding pixel point of the second feature map is obtained by multiplying the pixel value of the first pixel point in the lesion probability map by the pixel value of the second pixel point corresponding to the first pixel point in the reference feature map, so that the encoding processing of the lesion probability map on the reference feature map is realized, and the reduction of the calculated amount can be facilitated.
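This encoding reduces to an element-wise (broadcast) multiplication; a short PyTorch sketch with hypothetical names:

```python
import torch

def encode_reference(ref_feat: torch.Tensor, prob_map: torch.Tensor) -> torch.Tensor:
    """Multiply each pixel of every channel of the reference feature map
    by the lesion probability at the corresponding position."""
    return ref_feat * prob_map   # (N, C, H, W) * (N, 1, H, W) -> (N, C, H, W)

second_feat = encode_reference(torch.randn(1, 256, 16, 16), torch.rand(1, 1, 16, 16))
```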
In some embodiments, the fusion processing sub-module is specifically configured to fuse, in order from high dimension to low dimension, the second feature map with the first feature map of each dimension in that order, to obtain the final fused feature map.
Different from the foregoing embodiment, the second feature map is fused with the first feature map of each dimension in order from high dimension to low dimension to obtain the final fused feature map, so that feature map fusion proceeds dimension by dimension. This allows context information to be fused sufficiently, improving the accuracy and feature richness of the final fused feature map, which in turn is beneficial to improving the accuracy of subsequent image detection.
In some embodiments, the reference feature map is the first feature map with the highest dimension. The fusion processing sub-module includes a first fusion unit, configured to fuse the reference feature map with a first low-dimensional feature map to obtain a first fused feature map with the same dimension as the first low-dimensional feature map, where the first low-dimensional feature map is the first feature map one dimension lower than the reference feature map; a second fusion unit, configured to fuse the second feature map with the first fused feature map to obtain a second fused feature map with the same dimension as the first fused feature map; a third fusion unit, configured to repeatedly fuse the second fused feature map with a second low-dimensional feature map to obtain a new second fused feature map with the same dimension as the second low-dimensional feature map, until the fusion of the first feature maps of the plurality of dimensions is completed, where the second low-dimensional feature map is the first feature map one dimension lower than the current second fused feature map; and a final fusion unit, configured to take the second fused feature map obtained by the last fusion as the final fused feature map.
Different from the foregoing embodiment, the reference feature map is fused with a first low-dimensional feature map (the first feature map one dimension lower than the reference feature map) to obtain a first fused feature map with the same dimension as the first low-dimensional feature map, and the second feature map is fused with the first fused feature map to obtain a second fused feature map with the same dimension as the first fused feature map. The fusion of the second fused feature map with a second low-dimensional feature map (the first feature map one dimension lower than the current second fused feature map) is then repeated, each time obtaining a new second fused feature map with the same dimension as the second low-dimensional feature map, until the first feature maps of the plurality of dimensions are all fused, and the second fused feature map obtained by the last fusion is taken as the final fused feature map. In this way, the lesion probability map is coupled, as a global feature, into the decoding process of image detection, so that the final fused feature map strengthens the specificity to the lesion and fully fuses the context information of the feature maps, improving the accuracy and feature richness of the final fused feature map, which is in turn beneficial to improving the accuracy of subsequent image detection.
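One way to realize this high-to-low fusion order is the upsample-concatenate-convolve loop sketched below; the fusion operator itself is an assumption, since the patent only fixes the order of fusion and the injection point of the second feature map:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FuseBlock(nn.Module):
    """Upsample the higher-dimension map to the lower one's size,
    concatenate, and convolve back to the lower channel count."""
    def __init__(self, hi_ch, lo_ch):
        super().__init__()
        self.conv = nn.Conv2d(hi_ch + lo_ch, lo_ch, 3, padding=1)

    def forward(self, hi, lo):
        hi = F.interpolate(hi, size=lo.shape[2:], mode="bilinear",
                           align_corners=False)
        return self.conv(torch.cat([hi, lo], dim=1))

def decode(first_feats, second_feat, blocks, inject_block):
    """first_feats: first feature maps ordered high -> low dimension,
    with the reference feature map at index 0; second_feat is the
    encoded second feature map."""
    fused = blocks[0](first_feats[0], first_feats[1])  # first fused map
    fused = inject_block(second_feat, fused)           # second fused map
    for blk, lo in zip(blocks[1:], first_feats[2:]):   # repeat downward
        fused = blk(fused, lo)
    return fused                                       # final fused feature map

feats = [torch.randn(1, 256, 8, 8),     # reference feature map (highest dim)
         torch.randn(1, 128, 16, 16),
         torch.randn(1, 64, 32, 32)]
second = torch.randn(1, 256, 8, 8)       # encoded second feature map
blocks = [FuseBlock(256, 128), FuseBlock(128, 64)]
final = decode(feats, second, blocks, inject_block=FuseBlock(256, 128))
```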
In some embodiments, the detection result includes a detection region of the lesion in the medical image to be detected. The image detection apparatus 50 further includes an organ detection module, configured to perform organ detection on the medical image to be detected to obtain an organ region in the medical image to be detected, and a ratio obtaining module, configured to obtain the lesion proportion of the lesion detection region within the organ region.
Different from the foregoing embodiment, organ detection is performed on the medical image to be detected to obtain the organ region in the medical image to be detected, and the lesion proportion of the lesion detection region within the organ region is obtained, which is beneficial to further generating clinically useful reference information from the detection result, thereby improving the user experience.
In some embodiments, the image detection apparatus 50 further comprises a preprocessing module for preprocessing the medical image to be detected, wherein the preprocessing operation at least comprises: normalizing the pixel values of the medical image to be detected to within a preset range using a preset window value.
Different from the foregoing embodiment, before feature extraction is performed on the medical image to be detected, the medical image to be detected is preprocessed, and the preprocessing operation at least includes normalizing the pixel values of the medical image to be detected to within a preset range using a preset window value, which is beneficial to enhancing the contrast of the medical image to be detected and to improving the accuracy of the subsequently extracted first feature maps.
In some embodiments, the feature extraction module 52 is specifically configured to perform feature extraction on the medical image to be detected by using a feature extraction sub-network of the image detection model to obtain first feature maps of a plurality of dimensions, the image fusion module 54 is specifically configured to perform fusion on the lesion probability map and the first feature maps of the plurality of dimensions by using a fusion processing sub-network of the image detection model to obtain a final fusion feature map, and the detection processing module 55 is specifically configured to perform detection processing on the final fusion feature map by using the fusion processing sub-network of the image detection model to obtain a detection result about a lesion in the medical image to be detected.
Different from the embodiment, the feature extraction is performed on the medical image to be detected by using the feature extraction sub-network of the image detection model to obtain the first feature maps with a plurality of dimensions, the focus probability map is fused with the first feature maps with a plurality of dimensions by using the fusion processing sub-network of the image detection model to obtain the final fusion feature map, and the final fusion feature map is detected by using the fusion processing sub-network of the image detection model to obtain the detection result of the focus in the medical image to be detected, so that the feature extraction, the fusion processing and the image detection tasks are performed by using the image detection model, and the improvement of the image detection efficiency can be facilitated.
In some embodiments, the image detection apparatus 50 includes a sample image obtaining module configured to acquire a sample medical image, where the sample medical image includes an actual region of a lesion; a sample feature extraction module configured to perform feature extraction on the sample medical image using the feature extraction sub-network to obtain first sample feature maps of a plurality of dimensions; a probability image generation module configured to take the first sample feature map of a preset dimension as a reference sample feature map and generate a lesion sample probability map using the reference sample feature map, where the lesion sample probability map represents the probabilities that different regions in the sample medical image belong to a lesion; a sample image fusion module configured to fuse the lesion sample probability map with the first sample feature maps of the plurality of dimensions using the fusion processing sub-network to obtain a final fused sample feature map; a sample detection processing module configured to perform detection processing on the final fused sample feature map using the fusion processing sub-network to obtain a detection region about the lesion in the sample medical image; and a training adjustment module configured to adjust the network parameters of the image detection model using the difference between the actual region and the detection region.
Different from the foregoing embodiment, a sample medical image containing an actual region of a lesion is acquired, feature extraction is performed on the sample medical image using the feature extraction sub-network to obtain first sample feature maps of a plurality of dimensions, and the first sample feature map of a preset dimension is taken as a reference sample feature map to generate a lesion sample probability map, which represents the probability that different regions in the sample medical image belong to a lesion. The lesion sample probability map is fused with the first sample feature maps of the plurality of dimensions to obtain a final fused sample feature map, detection processing is performed on the final fused sample feature map to obtain a detection region about the lesion in the sample medical image, and the network parameters of the image detection model are adjusted using the difference between the actual region and the detection region. Therefore, during training of the image detection model, the lesion sample probability map is coupled, as a global feature, into the decoding process of image detection, so that the final fused sample feature map can strengthen the specificity to the lesion, the sensitivity of the image detection model to lesions can be enhanced, and the training speed of the model can be improved.
In some embodiments, the training adjustment module includes a loss determination sub-module, configured to process the actual region and the detection region by using a set similarity loss function, and determine a loss value of the image detection model, and the training adjustment module includes a parameter adjustment sub-module, configured to adjust a network parameter of the image detection model at a preset learning rate by using the loss value.
Different from the foregoing embodiment, the actual region and the detection region are processed using the set similarity loss function to determine the loss value of the image detection model, which ensures the accuracy of the loss value; the network parameters of the image detection model are then adjusted using the loss value at a preset learning rate, so that the difference between the detection region and the actual region can be reduced during training, improving the accuracy of the image detection model.
Referring to fig. 6, fig. 6 is a schematic diagram of a framework of an embodiment of an electronic device 60 according to the present application. The electronic device 60 comprises a memory 61 and a processor 62 coupled to each other, the processor 62 being configured to execute program instructions stored in the memory 61 to implement the steps of any of the above-described image detection method embodiments. In one specific implementation scenario, the electronic device 60 may include, but is not limited to, a microcomputer and a server; in addition, the electronic device 60 may also be a mobile device such as a notebook computer or a tablet computer, which is not limited herein.
In particular, the processor 62 is configured to control itself and the memory 61 to implement the steps of any of the above-described image detection method embodiments. The processor 62 may also be referred to as a CPU (Central Processing Unit). The processor 62 may be an integrated circuit chip having signal processing capabilities. The processor 62 may also be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. In addition, the processor 62 may be implemented jointly by a plurality of integrated circuit chips.
According to the scheme, the accuracy of image detection can be improved.
Referring to fig. 7, fig. 7 is a block diagram illustrating an embodiment of a computer readable storage medium 70 according to the present application. The computer readable storage medium 70 stores program instructions 701 executable by a processor, the program instructions 701 being for implementing the steps of any of the above-described embodiments of the image detection method.
According to the scheme, the accuracy of image detection can be improved.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a module or a unit is merely one type of logical division, and an actual implementation may have another division, for example, a unit or a component may be combined or integrated with another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some interfaces, and may be in an electrical, mechanical or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on network elements. Some or all of the units can be selected according to actual needs to achieve the purpose of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.

Claims (15)

1. An image detection method, comprising:
acquiring a medical image to be detected;
performing feature extraction on the medical image to be detected to obtain a first feature map with a plurality of dimensions;
taking the first feature map with preset dimensionality as a reference feature map, and generating a lesion probability map by using the reference feature map, wherein the lesion probability map is used for representing the probability that different regions in the medical image to be detected belong to lesions;
fusing the lesion probability map with the first feature maps of the plurality of dimensions to obtain a final fused feature map;
and performing detection processing on the final fused feature map to obtain a detection result about the lesion in the medical image to be detected.
2. The method of claim 1, wherein prior to generating the lesion probability map using the reference feature map, the method further comprises:
performing prediction processing by using the reference feature map to obtain a first probability value that the medical image to be detected contains the lesion;
determining whether to perform the step of generating a lesion probability map using the reference feature map and subsequent steps based on the first probability value.
3. The method of claim 2, wherein said determining whether to perform said step of generating a lesion probability map using said reference feature map and subsequent steps based on said first probability value comprises:
if the first probability value meets a first preset condition, executing the step of generating the focus probability map by using the reference feature map and subsequent steps; or
If the medical image to be measured is a two-dimensional medical image included in a three-dimensional medical image, the determining whether to execute the step of generating a lesion probability map using the reference feature map and the subsequent steps based on the first probability value includes:
sorting the first probability values corresponding to the two-dimensional medical images according to a descending order, and selecting a preset number of the first probability values;
presetting the preset number of first probability values to obtain a second probability value;
and if the second probability value meets a second preset condition, executing the step of generating the focus probability map by using the reference feature map and the subsequent steps.
4. The method according to claim 3, wherein the first preset condition comprises: the first probability value is greater than or equal to a first probability threshold; the second preset condition comprises: the second probability value is greater than or equal to a second probability threshold; and the preset processing is an averaging operation;
and/or, the method further comprises:
and if the first probability value does not meet a first preset condition or the second probability value does not meet a second preset condition, determining that the medical image to be detected does not contain the focus.
5. The method according to any one of claims 1 to 4, wherein the generating a lesion probability map using the reference feature map comprises:
counting gradient values of each pixel point in the reference feature map with respect to the lesion to generate a class activation map as the lesion probability map.
6. The method according to any one of claims 1 to 5, wherein the fusing the lesion probability map with the first feature map of the plurality of dimensions to obtain a final fused feature map comprises:
coding the reference characteristic diagram by using the focus probability diagram to obtain a second characteristic diagram;
and fusing the second feature map and the first feature maps of the dimensions to obtain a final fused feature map.
7. The method according to claim 6, wherein the encoding the reference feature map using the lesion probability map to obtain a second feature map comprises:
multiplying the pixel value of a first pixel point in the lesion probability map by the pixel value of a second pixel point corresponding to the first pixel point in the reference feature map to obtain the pixel value of the corresponding pixel point of the second feature map;
and/or, the fusing the second feature map with the first feature maps of the plurality of dimensions to obtain a final fused feature map, including:
and fusing, in order of the dimensions from high to low, the second feature map with the first feature map of each dimension in that order, to obtain a final fused feature map.
8. The method of claim 7, wherein the reference feature map is the first feature map with the highest dimension; according to the order of the dimensions from top to bottom, the second feature graph is fused with the first feature graph of each dimension ordered according to the order to obtain a final fused feature graph, and the method comprises the following steps:
fusing the reference feature map and a first low-dimensional feature map to obtain a first fused feature map with the same dimension as that of the first low-dimensional feature map, wherein the first low-dimensional feature map is the first feature map with one dimension lower than that of the reference feature map;
fusing the second feature map and the first fused feature map to obtain a second fused feature map with the same dimension as the first fused feature map;
repeatedly performing fusion on the second fusion feature map and a second low-dimensional feature map to obtain a new second fusion feature map with the same dimension as that of the second low-dimensional feature map until the first feature maps of the plurality of dimensions are completely fused, wherein the second low-dimensional feature map is the first feature map which is one dimension lower than the current second fusion feature map;
and taking the second fusion feature map obtained by final fusion as the final fusion feature map.
9. The method according to any one of claims 1 to 8, wherein the detection result includes a detection region of the lesion in the medical image to be tested, the method further comprising:
performing organ detection on the medical image to be detected to obtain an organ area in the medical image to be detected;
acquiring the proportion of the focus detection area in the organ area;
and/or before the feature extraction is performed on the medical image to be detected to obtain the first feature maps with a plurality of dimensions, the method further comprises the following steps:
preprocessing the medical image to be detected, wherein the preprocessing operation at least comprises: normalizing the pixel values of the medical image to be detected to within a preset range by using a preset window value.
10. The method according to any one of claims 1 to 9, wherein the performing feature extraction on the medical image to be tested to obtain a first feature map with a plurality of dimensions includes:
performing feature extraction on the medical image to be detected by using a feature extraction sub-network of an image detection model to obtain the first feature maps of the dimensions;
the fusing the lesion probability map and the first feature maps of the dimensions to obtain a final fused feature map, and performing detection processing on the final fused feature map to obtain a detection result about the lesion in the medical image to be detected, includes:
fusing the focus probability map and the first feature maps of the dimensions by utilizing a fusion processing sub-network of the image detection model to obtain a final fusion feature map;
and detecting the final fusion characteristic diagram by utilizing a fusion processing sub-network of the image detection model to obtain a detection result of the focus in the medical image to be detected.
11. The method according to claim 10, wherein before the feature extraction of the medical image to be detected by using the feature extraction sub-network of the image detection model to obtain the first feature maps of the several dimensions, the method further comprises:
acquiring a sample medical image, wherein the sample medical image comprises an actual region of a focus;
performing feature extraction on the sample medical image by using the feature extraction sub-network to obtain a first sample feature map with a plurality of dimensions;
taking the first sample feature map with preset dimensionality as a reference sample feature map, and generating a lesion sample probability map by using the reference sample feature map, wherein the lesion sample probability map is used for representing the probability that different regions in the sample medical image belong to a lesion;
fusing the focus sample probability graph and the first sample feature graphs of the dimensions by utilizing the fusion processing sub-network to obtain a final fusion sample feature graph;
detecting the final fusion sample characteristic diagram by using the fusion processing sub-network to obtain a detection area about the focus in the sample medical image;
and adjusting the network parameters of the image detection model by using the difference between the actual area and the detection area.
12. The method of claim 11, wherein the adjusting the network parameters of the image detection model using the difference between the actual region and the detection region comprises:
processing the actual region and the detection region by adopting a set similarity loss function to determine a loss value of the image detection model;
and adjusting the network parameters of the image detection model by using the loss value at a preset learning rate.
13. An image detection apparatus, characterized by comprising:
the image acquisition module is used for acquiring a medical image to be detected;
the characteristic extraction module is used for extracting the characteristics of the medical image to be detected to obtain a first characteristic diagram with a plurality of dimensions;
the image generation module is used for generating a lesion probability map by using the first feature map with preset dimensionality as a reference feature map, wherein the lesion probability map is used for representing the probability that different regions in the medical image to be detected belong to a lesion;
the image fusion module is used for fusing the focus probability map and the first feature maps of the dimensions to obtain a final fusion feature map;
and the detection processing module is used for detecting and processing the final fusion characteristic diagram to obtain a detection result about the focus in the medical image to be detected.
14. An electronic device comprising a memory and a processor coupled to each other, the processor being configured to execute program instructions stored in the memory to implement the image detection method of any one of claims 1 to 12.
15. A computer-readable storage medium having stored thereon program instructions, which when executed by a processor, implement the image detection method of any one of claims 1 to 12.
CN202110214861.XA 2021-02-25 2021-02-25 Image detection method and related device and equipment Pending CN112949654A (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202110214861.XA CN112949654A (en) 2021-02-25 2021-02-25 Image detection method and related device and equipment
PCT/CN2021/117801 WO2022179083A1 (en) 2021-02-25 2021-09-10 Image detection method and apparatus, and device, medium and program
JP2022549312A JP2023518160A (en) 2021-02-25 2021-09-10 Image detection method, apparatus, device, medium and program

Publications (1)

Publication Number Publication Date
CN112949654A true CN112949654A (en) 2021-06-11

Family

ID=76246421

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110214861.XA Pending CN112949654A (en) 2021-02-25 2021-02-25 Image detection method and related device and equipment

Country Status (3)

Country Link
JP (1) JP2023518160A (en)
CN (1) CN112949654A (en)
WO (1) WO2022179083A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022179083A1 (en) * 2021-02-25 2022-09-01 上海商汤智能科技有限公司 Image detection method and apparatus, and device, medium and program
WO2023160157A1 (en) * 2022-02-28 2023-08-31 腾讯科技(深圳)有限公司 Three-dimensional medical image recognition method and apparatus, and device, storage medium and product
CN116798596A (en) * 2022-03-14 2023-09-22 数坤(北京)网络科技股份有限公司 Information association method, device, electronic equipment and storage medium

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116597988B (en) * 2023-07-18 2023-09-19 济南蓝博电子技术有限公司 Intelligent hospital operation method and system based on medical information


Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10373312B2 (en) * 2016-11-06 2019-08-06 International Business Machines Corporation Automated skin lesion segmentation using deep side layers
CN109886282B (en) * 2019-02-26 2021-05-28 腾讯科技(深圳)有限公司 Object detection method, device, computer-readable storage medium and computer equipment
CN110348541B (en) * 2019-05-10 2021-12-10 腾讯医疗健康(深圳)有限公司 Method, device and equipment for classifying fundus blood vessel images and storage medium
CN111429473B (en) * 2020-02-27 2023-04-07 西北大学 Chest film lung field segmentation model establishment and segmentation method based on multi-scale feature fusion
CN112949654A (en) * 2021-02-25 2021-06-11 上海商汤智能科技有限公司 Image detection method and related device and equipment

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110216951A1 (en) * 2010-03-03 2011-09-08 Medicsight Plc Medical Image Processing
WO2017086433A1 (en) * 2015-11-19 2017-05-26 国立大学法人 東京大学 Medical image processing method, device, system, and program
CN107945168A (en) * 2017-11-30 2018-04-20 上海联影医疗科技有限公司 The processing method and magic magiscan of a kind of medical image
CN109949271A (en) * 2019-02-14 2019-06-28 腾讯科技(深圳)有限公司 A kind of detection method based on medical image, the method and device of model training
CN110807788A (en) * 2019-10-21 2020-02-18 腾讯科技(深圳)有限公司 Medical image processing method, device, electronic equipment and computer storage medium
CN111161279A (en) * 2019-12-12 2020-05-15 中国科学院深圳先进技术研究院 Medical image segmentation method and device and server
CN111768418A (en) * 2020-06-30 2020-10-13 北京推想科技有限公司 Image segmentation method and device and training method of image segmentation model
CN112116004A (en) * 2020-09-18 2020-12-22 推想医疗科技股份有限公司 Focus classification method and device and focus classification model training method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHOU Tao et al.: "Research Progress of Multimodal Medical Image Fusion and Recognition Technology", Journal of Biomedical Engineering, vol. 30, no. 05, pages 1117 - 1122 *


Also Published As

Publication number Publication date
JP2023518160A (en) 2023-04-28
WO2022179083A1 (en) 2022-09-01

Similar Documents

Publication Publication Date Title
CN108446730B (en) CT pulmonary nodule detection device based on deep learning
US10706333B2 (en) Medical image analysis method, medical image analysis system and storage medium
CN109919928B (en) Medical image detection method and device and storage medium
CN112949654A (en) Image detection method and related device and equipment
CN112070781B (en) Processing method and device of craniocerebral tomography image, storage medium and electronic equipment
CN109754403A (en) Tumour automatic division method and system in a kind of CT image
CN110276741B (en) Method and device for nodule detection and model training thereof and electronic equipment
CN114549552A (en) Lung CT image segmentation device based on space neighborhood analysis
CN111583285A (en) Liver image semantic segmentation method based on edge attention strategy
CN109034218B (en) Model training method, device, equipment and storage medium
Yamanakkanavar et al. MF2-Net: A multipath feature fusion network for medical image segmentation
CN115222713A (en) Method and device for calculating coronary artery calcium score and storage medium
CN112396605A (en) Network training method and device, image recognition method and electronic equipment
CN115601299A (en) Intelligent liver cirrhosis state evaluation system and method based on images
CN115100494A (en) Identification method, device and equipment of focus image and readable storage medium
WO2021184195A1 (en) Medical image reconstruction method, and medical image reconstruction network training method and apparatus
Shah et al. Kidney tumor segmentation and classification on abdominal CT scans
CN113222985B (en) Image processing method, image processing device, computer equipment and medium
WO2022227193A1 (en) Liver region segmentation method and apparatus, and electronic device and storage medium
CN115131361A (en) Training of target segmentation model, focus segmentation method and device
CN114649092A (en) Auxiliary diagnosis method and device based on semi-supervised learning and multi-scale feature fusion
CN111476775B (en) DR symptom identification device and method
JP7352261B2 (en) Learning device, learning method, program, trained model, and bone metastasis detection device
Vinta et al. Segmentation and Classification of Interstitial Lung Diseases Based on Hybrid Deep Learning Network Model
WO2022138277A1 (en) Learning device, method, and program, and medical image processing device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40045100

Country of ref document: HK