CN115601299A - Intelligent liver cirrhosis state evaluation system and method based on images - Google Patents

Intelligent liver cirrhosis state evaluation system and method based on images

Info

Publication number
CN115601299A
Authority
CN
China
Prior art keywords
image
liver
neural network
convolutional neural
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202211081848.2A
Other languages
Chinese (zh)
Inventor
邓敏
沈亚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
First Hospital of Jiaxing
Original Assignee
First Hospital of Jiaxing
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by First Hospital of Jiaxing filed Critical First Hospital of Jiaxing
Priority to CN202211081848.2A priority Critical patent/CN115601299A/en
Publication of CN115601299A publication Critical patent/CN115601299A/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/467Encoded features or binary features, e.g. local binary patterns [LBP]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/54Extraction of image or video features relating to texture
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30056Liver; Hepatic

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Quality & Reliability (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to the field of intelligent medical treatment, and specifically discloses an intelligent liver cirrhosis state evaluation system based on images and a method thereof. Noise of the liver image is filtered by a noise reduction module based on an automatic encoder; local binary patterning processing and Canny edge detection are performed on the noise-reduced liver image to obtain a local binary pattern map and a Canny edge detection map; the local binary pattern map, the Canny edge detection map and the noise-reduced liver image are then combined to obtain a multi-channel liver image. In this way, the input of the neural network is expanded so that the feature extractor based on the neural network can extract richer features, and the accuracy of liver cirrhosis degree evaluation is thereby improved.

Description

Intelligent liver cirrhosis state evaluation system and method based on images
Technical Field
The application relates to the field of intelligent medical treatment, in particular to an intelligent liver cirrhosis state evaluation system and method based on images.
Background
Liver cirrhosis is a chronic progressive liver disease that is common in clinical practice. Its histopathology is characterized by extensive hepatocyte necrosis, nodular regeneration of the remaining hepatocytes, connective tissue hyperplasia and fibrous septa formation, which destroy the hepatic lobule structure and lead to pseudolobule formation; the liver gradually deforms and hardens, developing into cirrhosis.
In the course of treating liver cirrhosis, the degree of cirrhosis must first be evaluated precisely to determine the stage of the disease, so that a correct diagnosis, treatment and care plan can be given. Traditionally, the degree of cirrhosis is evaluated by a clinician: the patient first goes to the imaging department to have liver images acquired, and the obtained liver images are then handed to the clinician, who gives clinical analysis and decision-making opinions by observing the features of the liver in the images. However, this approach is inefficient. Moreover, if the degree of cirrhosis has to be evaluated by a clinician every time, it is very inconvenient for a patient who wants to understand his or her physical condition more frequently.
In recent years, the development of medical big data and deep-learning-based deep neural network technology has provided technical support for intelligent medical treatment. Accordingly, a liver cirrhosis degree evaluation scheme based on medical images is desired.
Disclosure of Invention
The present application is proposed to solve the above-mentioned technical problems. Embodiments of the application provide an image-based intelligent evaluation system for cirrhosis state and a method thereof, in which noise of the liver image is filtered out by a noise reduction module based on an automatic encoder, local binary patterning processing and Canny edge detection are performed on the noise-reduced liver image to obtain a local binary pattern map and a Canny edge detection map, and the local binary pattern map, the Canny edge detection map and the noise-reduced liver image are then merged to obtain a multi-channel liver image.
According to an aspect of the present application, there is provided an image-based liver cirrhosis state intelligent evaluation system, including:
the image acquisition module is used for acquiring a liver image of a patient to be detected;
the image denoising module is used for enabling the liver image of the patient to be detected to pass through the denoising module based on the automatic encoder to obtain a denoised liver image;
the texture feature extraction module is used for carrying out local binary patterning processing and Canny edge detection on the liver image subjected to noise reduction to obtain a local binary pattern image and a Canny edge detection image;
the multi-channel merging module is used for merging the local binary pattern image, the Canny edge detection image and the de-noised liver image to obtain a multi-channel liver image;
a dual-stream encoding module, configured to pass the multichannel liver image through a dual-stream network structure including a first convolutional neural network and a second convolutional neural network to obtain a classification feature map, where the first convolutional neural network uses a first convolutional kernel having a first size, and the second convolutional neural network uses a second convolutional kernel having a second size, and the first size is different from the second size; and
the evaluation result generation module is used for passing the classification feature map through a classifier to obtain a classification result, where the classification result is a liver cirrhosis degree grade label of the patient to be detected.
In the above intelligent evaluation system for cirrhosis state based on image, the image denoising module includes: the convolutional coding unit is used for inputting the liver image of the patient to be detected into the coder of the noise reduction module, wherein the coder uses a convolutional layer to perform explicit spatial coding on the liver image of the patient to be detected so as to obtain image characteristics; and the deconvolution coding unit is used for inputting the image characteristics into a decoder of the noise reduction module, wherein the decoder uses a deconvolution layer to perform deconvolution processing on the image characteristics so as to obtain the noise-reduced liver image.
In the above intelligent evaluation system for cirrhosis state based on images, the multi-channel merging module is further configured to merge the local binary pattern map, the Canny edge detection map, and the noise-reduced liver image along a channel dimension by the following formula to obtain the multi-channel liver image; wherein the formula is F_h = Concat[F_1, F_2, F_3]_c, where F_1 is the local binary pattern map, F_2 is the Canny edge detection map, F_3 is the noise-reduced liver image, and Concat[·]_c denotes the concatenation function along the channel dimension.
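As an illustrative aid (not part of the patent text), the channel-wise concatenation Concat[F_1, F_2, F_3]_c can be sketched with a tensor library; the use of PyTorch and the tensor shapes below are assumptions for the example only:

```python
import torch

# Assumed shapes for illustration: single-channel LBP and Canny maps,
# and a 3-channel noise-reduced liver image of size 256 x 256.
lbp_map = torch.rand(1, 1, 256, 256)    # F1: local binary pattern map
canny_map = torch.rand(1, 1, 256, 256)  # F2: Canny edge detection map
denoised = torch.rand(1, 3, 256, 256)   # F3: noise-reduced liver image

# Concat[F1, F2, F3]_c: concatenation along the channel dimension (dim=1),
# giving the multi-channel (here 5-channel) liver image.
multi_channel = torch.cat([lbp_map, canny_map, denoised], dim=1)
print(multi_channel.shape)  # torch.Size([1, 5, 256, 256])
```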
In the above intelligent evaluation system for cirrhosis status based on image, the dual-stream encoding module includes: a first depth convolutional coding unit, configured to perform convolutional coding on the multi-channel liver image by using the first convolutional neural network with the first convolutional core, so as to extract first feature maps from each layer of the first convolutional neural network respectively to obtain a plurality of first feature maps; a second depth convolution coding unit, configured to perform convolution coding on the multi-channel liver image by using the second convolution neural network and the second convolution kernel so as to extract a second feature map from each layer of the second convolution neural network respectively to obtain a plurality of second feature maps; and the dense connection unit is used for respectively fusing the first feature maps and the second feature maps with corresponding depths in each group of the plurality of first feature maps and the plurality of second feature maps to obtain a plurality of fused feature maps.
In the above system for intelligently evaluating a cirrhosis state based on an image, the first deep convolutional coding unit is further configured to: performing convolution processing, pooling processing along a feature matrix, and activation processing on input data in forward pass of layers using the layers of the first convolutional neural network to extract first feature maps from the layers of the first convolutional neural network respectively to obtain a plurality of first feature maps, wherein an input of the first layer of the first convolutional neural network is the multi-channel liver image, and the first convolutional neural network uses a first convolution kernel having a first size.
In the above intelligent evaluation system for cirrhosis state based on image, the second deep convolutional coding unit is further configured to perform convolutional processing, pooling processing along a feature matrix, and activation processing on input data in forward pass of layers using layers of the second convolutional neural network to extract second feature maps from the layers of the second convolutional neural network respectively to obtain a plurality of second feature maps, where an input of a first layer of the second convolutional neural network is the multi-channel liver image, and the second convolutional neural network uses a second convolutional kernel having a second size, and the first size is different from the second size.
In the above intelligent evaluation system for cirrhosis state based on image, the dual-stream encoding module further includes: the optimization unit is used for respectively calculating channel squeezing-excitation optimization factors of depth recursion of each fused feature map based on the statistical features of the feature value sets of all the positions of each fused feature map in the plurality of fused feature maps; and the fusion unit is used for weighting each fusion feature map in the fusion feature maps by taking the depth recursive channel squeezing-excitation optimization factor of each fusion feature map as weight so as to obtain a plurality of weighted fusion feature maps serving as the classification feature map.
In the above intelligent evaluation system for cirrhosis state based on image, the optimization unit is further configured to: calculating the optimization factors corresponding to the feature values of all positions of all the fused feature maps in the plurality of fused feature maps according to the following formula based on the mean value and the variance of the feature value sets of all the positions of all the fused feature maps in the plurality of fused feature maps; wherein the formula is:
Figure BDA0003833556000000031
wherein v_i denotes the feature value of each position of each fused feature map, μ and σ denote the mean and variance, respectively, of the feature value set of all positions of each fused feature map, and exp(·) denotes an exponential operation on a feature value, i.e. the calculation of a natural exponential function value raised to the power of that feature value.
In the above intelligent evaluation system for cirrhosis state based on image, the evaluation result generating module is further configured to: processing the classification feature map using the classifier to generate a classification result in accordance with the following formula:
O = softmax{(W_n, B_n) : … : (W_1, B_1) | Project(F)}
wherein Project(F) denotes projecting the classification feature map as a vector, W_1 to W_n are the weight matrices of the fully connected layers of each layer, and B_1 to B_n are the bias matrices of the fully connected layers of each layer.
According to another aspect of the present application, there is also provided an image-based intelligent assessment method for cirrhosis state, comprising:
acquiring a liver image of a patient to be detected;
enabling the liver image of the patient to be detected to pass through a noise reduction module based on an automatic encoder to obtain a noise-reduced liver image;
carrying out local binary patterning processing and Canny edge detection on the liver image subjected to noise reduction to obtain a local binary pattern image and a Canny edge detection image;
merging the local binary pattern image, the Canny edge detection image and the denoised liver image to obtain a multi-channel liver image;
passing the multichannel liver image through a dual-flow network structure comprising a first convolutional neural network and a second convolutional neural network to obtain a classification feature map, wherein the first convolutional neural network uses a first convolutional kernel with a first size, and the second convolutional neural network uses a second convolutional kernel with a second size, and the first size is different from the second size; and
and passing the classification characteristic diagram through a classifier to obtain a classification result, wherein the classification result is a liver cirrhosis degree grade label of the patient to be detected.
In the above intelligent assessment method for cirrhosis state based on images, the merging of the local binary pattern map, the Canny edge detection map, and the noise-reduced liver image to obtain a multi-channel liver image further includes: merging the local binary pattern map, the Canny edge detection map and the denoised liver image along a channel dimension to obtain the multi-channel liver image according to the following formula; wherein the formula is F_h = Concat[F_1, F_2, F_3]_c, where F_1 is the local binary pattern map, F_2 is the Canny edge detection map, F_3 is the noise-reduced liver image, and Concat[·]_c denotes the concatenation function along the channel dimension.
In the above intelligent evaluation method for liver cirrhosis state based on image, the passing the classification feature map through a classifier to obtain a classification result, where the classification result is a grade label of liver cirrhosis degree of a patient to be detected, further includes: processing the classification feature map using the classifier to generate a classification result according to the following formula:
O = softmax{(W_n, B_n) : … : (W_1, B_1) | Project(F)}
wherein Project(F) denotes projecting the classification feature map as a vector, W_1 to W_n are the weight matrices of the fully connected layers of each layer, and B_1 to B_n are the bias matrices of the fully connected layers of each layer.
According to still another aspect of the present application, there is provided an electronic apparatus including: a processor; and a memory having stored therein computer program instructions which, when executed by the processor, cause the processor to perform the image-based intelligent assessment method of cirrhosis state as described above.
According to yet another aspect of the present application, there is provided a computer readable medium having stored thereon computer program instructions, which, when executed by a processor, cause the processor to execute the image-based intelligent assessment method of liver cirrhosis state as described above.
Compared with the prior art, in the intelligent evaluation system and method for the liver cirrhosis state based on images provided by the present application, noise of the liver image is filtered out by the noise reduction module based on the automatic encoder; local binary patterning processing and Canny edge detection are performed on the noise-reduced liver image to obtain a local binary pattern map and a Canny edge detection map; the local binary pattern map, the Canny edge detection map and the noise-reduced liver image are then combined to obtain a multi-channel liver image. In this way, the input of the neural network is expanded so that the feature extractor based on the neural network can extract richer features, and the accuracy of liver cirrhosis degree evaluation is thereby improved.
Drawings
The above and other objects, features and advantages of the present application will become more apparent by describing in more detail embodiments of the present application with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the principles of the application. In the drawings, like reference numbers generally represent like parts or steps.
Fig. 1 illustrates a block diagram of an image-based intelligent assessment system for cirrhosis state according to an embodiment of the present application.
Fig. 2 illustrates a system architecture diagram of an image-based intelligent assessment system for cirrhosis state according to an embodiment of the present application.
Fig. 3 illustrates a block diagram of a dual-stream encoding module in an image-based intelligent evaluation system for cirrhosis state according to an embodiment of the present application.
Fig. 4 illustrates a flowchart of an image-based intelligent assessment method for cirrhosis state according to an embodiment of the present application.
FIG. 5 illustrates a block diagram of an electronic device in accordance with an embodiment of the present application.
Detailed Description
Hereinafter, example embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be apparent that the described embodiments are only a few embodiments of the present application, and not all embodiments of the present application, and it should be understood that the present application is not limited to the example embodiments described herein.
Summary of the application
As described above, the analysis of the conventional imaging examination of liver cirrhosis is performed by the attending physician, but the resources of the clinician are limited, and often the patient needs to wait for a long time to get the opportunity of diagnosis and analysis.
Correspondingly, in the technical scheme of the application, the liver image of the patient to be detected is first acquired. Considering that image noise is introduced during image acquisition by the acquisition device itself and by environmental interference, noise reduction processing is performed on the liver image of the patient to be detected before liver image feature extraction, so as to obtain the noise-reduced liver image.
As described above, when evaluating the degree of cirrhosis, more attention needs to be paid to the texture features on the liver surface, and in order to enhance the sensing and extracting capability of the feature extractor based on the deep neural network model for the texture features, in the technical solution of the present application, first, the local binary pattern processing and Canny edge detection are performed on the noise-reduced liver image to obtain a local binary pattern map and a Canny edge detection map, and then, the local binary pattern map, the Canny edge detection map and the noise-reduced liver image are merged to obtain a multi-channel liver image.
The specific principle of the local binary pattern is as follows: a 3 × 3 window is taken as the basic unit; each pixel in the neighborhood is compared with the central pixel and marked 1 if its value is greater than that of the central pixel and 0 otherwise, so that the pixels in the neighborhood are binarized; the binary values are then multiplied by their corresponding weights in the binary sequence and summed to obtain the LBP value of the central pixel. Canny edge detection can preserve edge contours while removing other image content unrelated to texture. After the local binary pattern map and the Canny edge detection map are obtained, they are merged with the noise-reduced liver image into 5 channels to serve as the input of the network; this expands the data width of the network input, enables the network to learn and express richer information, and improves the accuracy.
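As a hedged sketch (not taken from the patent), the local binary patterning, Canny edge detection and 5-channel merge described above could be implemented as follows; the use of OpenCV and scikit-image, the helper name, the LBP parameters (8 neighbors, radius 1) and the Canny thresholds are all illustrative assumptions:

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def build_multichannel_liver_image(denoised_bgr: np.ndarray) -> np.ndarray:
    """Hypothetical helper: LBP map + Canny map + denoised image -> 5-channel input.

    `denoised_bgr` is assumed to be an 8-bit BGR liver image that has already
    passed through the automatic-encoder-based noise reduction module.
    """
    gray = cv2.cvtColor(denoised_bgr, cv2.COLOR_BGR2GRAY)

    # Local binary pattern over a 3x3 neighborhood (8 neighbors, radius 1):
    # each neighbor is compared with the central pixel and the binary results
    # are weighted and summed to give the LBP value of the central pixel.
    lbp = local_binary_pattern(gray, P=8, R=1, method="default").astype(np.uint8)

    # Canny edge detection keeps edge contours and discards texture-unrelated content
    # (thresholds are illustrative, not specified by the patent).
    edges = cv2.Canny(gray, threshold1=50, threshold2=150)

    # Merge along the channel dimension: 1 (LBP) + 1 (Canny) + 3 (image) = 5 channels.
    return np.concatenate([lbp[..., None], edges[..., None], denoised_bgr], axis=-1)
```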
Then, the multichannel liver image is passed through a dual-flow network structure including a first convolutional neural network and a second convolutional neural network to obtain a classification feature map, wherein the first convolutional neural network uses a first convolutional kernel with a first size, and the second convolutional neural network uses a second convolutional kernel with a second size, and the first size is different from the second size. That is, the multi-channel liver image is passed through a dual-flow network structure having two image feature extractors, wherein the two image feature extractors can perform feature perception and filtering on the multi-channel liver image with different receptive fields to obtain a classification feature map.
Particularly, in the technical solution of the present application, on the one hand, when a feature extractor based on a convolutional neural network model extracts image features, the shallow features are mostly lines, textures, shapes and the like, while the deep features are more abstract features representing objects; on the other hand, texture features are present in different image areas. Therefore, in the technical solution of the present application, the first convolutional neural network and the second convolutional neural network use convolution kernels of different sizes so that they have different feature receptive fields, and feature maps (including deep feature maps and shallow feature maps) are extracted from different layers of the first convolutional neural network and the second convolutional neural network, with the feature maps of corresponding depths fused to obtain a plurality of fused feature maps as the classification feature map.
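A minimal sketch of this dual-stream idea follows, assuming PyTorch, two small three-layer streams with 3 × 3 and 5 × 5 kernels, and element-wise addition as the fusion of feature maps of corresponding depth; the class names, layer counts, channel widths and fusion operator are illustrative assumptions, not prescriptions of the patent:

```python
import torch
import torch.nn as nn

class Stream(nn.Module):
    """One branch of the dual-stream structure with a fixed kernel size."""
    def __init__(self, in_ch: int, kernel_size: int, widths=(16, 32, 64)):
        super().__init__()
        layers, prev = [], in_ch
        for w in widths:
            layers.append(nn.Sequential(
                nn.Conv2d(prev, w, kernel_size, padding=kernel_size // 2),  # convolution
                nn.MaxPool2d(2),   # pooling along the feature matrix
                nn.ReLU(),         # activation processing
            ))
            prev = w
        self.layers = nn.ModuleList(layers)

    def forward(self, x):
        feats = []
        for layer in self.layers:   # forward pass of each layer
            x = layer(x)
            feats.append(x)         # keep the feature map of every depth
        return feats

class DualStreamEncoder(nn.Module):
    def __init__(self, in_ch: int = 5):
        super().__init__()
        self.stream_a = Stream(in_ch, kernel_size=3)  # first kernel size
        self.stream_b = Stream(in_ch, kernel_size=5)  # second, different kernel size

    def forward(self, x):
        feats_a = self.stream_a(x)
        feats_b = self.stream_b(x)
        # Fuse the first and second feature maps of corresponding depth
        # (element-wise addition is an assumed fusion choice).
        return [fa + fb for fa, fb in zip(feats_a, feats_b)]

# Usage: a 5-channel liver image of assumed size 256 x 256.
fused_maps = DualStreamEncoder()(torch.rand(1, 5, 256, 256))
print([f.shape for f in fused_maps])
```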
In particular, in the technical solution of the present application, since each of the plurality of fused feature maps is obtained by fusing a first feature map and a second feature map of corresponding depth, it is desirable, when the plurality of fused feature maps are classified by the classifier, that the plurality of fused feature maps have high expression consistency in the depth dimension, so as to improve the classification accuracy.
Thus, for each fused feature map, denoted for example as F, a depth-recursive channel squeeze-excitation optimization factor is calculated, expressed as:
Figure BDA0003833556000000081
where μ and σ are the mean and variance of the feature set f_i ∈ F.
The depth-recursive squeeze-excitation optimization factor activates the depth recursion of the feature distribution based on the statistical characteristics of the feature sets in the depth direction, so as to infer the distribution of features at each sampling depth, and a squeeze-excitation mechanism composed of a ReLU-Sigmoid function is employed to obtain an attention-enhanced depth confidence value. By using this value as a weighting coefficient to weight the plurality of fused feature maps, the expression consistency of the plurality of fused feature maps in the depth dimension can be improved. Thus, the accuracy of evaluation of the degree of cirrhosis is improved.
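The exact optimization-factor formula appears only as an image in the original publication, so the following is merely a loose sketch of the described procedure under an assumed functional form: per-map mean/variance statistics are passed through a ReLU-Sigmoid gate to obtain a depth confidence value, which then weights the corresponding fused feature map. The function names and the gating expression are assumptions, not the patent's formula:

```python
import torch
import torch.nn.functional as F

def depth_recursive_weights(fused_maps):
    """Assumed interpretation: one confidence weight per fused feature map,
    derived from its mean/variance statistics via a ReLU-Sigmoid gate."""
    weights = []
    for fmap in fused_maps:                      # each fmap: [B, C, H, W]
        mu = fmap.mean(dim=(1, 2, 3))            # mean of all feature values
        sigma = fmap.var(dim=(1, 2, 3))          # variance of all feature values
        # Squeeze-excitation-style gate built from a ReLU-Sigmoid composition
        # (the actual optimization-factor formula is not reproduced here).
        weights.append(torch.sigmoid(F.relu(mu / (sigma + 1e-6))))
    return weights

def weight_fused_maps(fused_maps):
    ws = depth_recursive_weights(fused_maps)
    # Weight each fused feature map by its depth confidence value.
    return [w.view(-1, 1, 1, 1) * fmap for w, fmap in zip(ws, fused_maps)]
```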
Based on this, the present application proposes an image-based intelligent evaluation system for cirrhosis state, which includes: the image acquisition module is used for acquiring a liver image of a patient to be detected; the image denoising module is used for enabling the liver image of the patient to be detected to pass through the denoising module based on the automatic encoder so as to obtain a denoised liver image; the texture feature extraction module is used for carrying out local binary patterning processing and Canny edge detection on the liver image subjected to noise reduction to obtain a local binary pattern image and a Canny edge detection image; the multi-channel merging module is used for merging the local binary pattern image, the Canny edge detection image and the de-noised liver image to obtain a multi-channel liver image; a dual-stream encoding module, configured to pass the multichannel liver image through a dual-stream network structure including a first convolutional neural network and a second convolutional neural network to obtain a classification feature map, where the first convolutional neural network uses a first convolutional kernel having a first size, and the second convolutional neural network uses a second convolutional kernel having a second size, and the first size is different from the second size; and the evaluation result generation module is used for enabling the classification characteristic graph to pass through a classifier to obtain a classification result, and the classification result is a liver cirrhosis degree grade label of the patient to be detected.
Having described the general principles of the present application, various non-limiting embodiments of the present application will now be described with reference to the accompanying drawings.
Exemplary System
Fig. 1 illustrates a block diagram of an image-based intelligent assessment system for cirrhosis state according to an embodiment of the present application. As shown in fig. 1, an image-based intelligent evaluation system 100 for liver cirrhosis status according to an embodiment of the present application includes: the image acquisition module 110 is used for acquiring a liver image of a patient to be detected; the image denoising module 120 is configured to pass the liver image of the patient to be detected through a denoising module based on an automatic encoder to obtain a denoised liver image; the texture feature extraction module 130 is configured to perform local binary patterning processing and Canny edge detection on the denoised liver image to obtain a local binary pattern map and a Canny edge detection map; a multi-channel merging module 140, configured to merge the local binary pattern map, the Canny edge detection map, and the noise-reduced liver image to obtain a multi-channel liver image; a dual-stream encoding module 150, configured to pass the multi-channel liver image through a dual-stream network structure including a first convolutional neural network and a second convolutional neural network to obtain a classification feature map, wherein the first convolutional neural network uses a first convolutional kernel having a first size, and the second convolutional neural network uses a second convolutional kernel having a second size, and the first size is different from the second size; and an evaluation result generation module 160, configured to pass the classification feature map through a classifier to obtain a classification result, where the classification result is a liver cirrhosis degree grade label of the patient to be detected.
Fig. 2 illustrates a system architecture diagram of the image-based intelligent evaluation system 100 for liver cirrhosis state according to an embodiment of the present application. As shown in fig. 2, in the system architecture of the intelligent evaluation system 100 for cirrhosis state based on images, first, an image of the liver of a patient to be detected is obtained. And then, the liver image of the patient to be detected passes through a noise reduction module based on an automatic encoder to obtain a noise-reduced liver image. And then, carrying out local binary patterning processing and Canny edge detection on the liver image subjected to noise reduction to obtain a local binary pattern image and a Canny edge detection image. And then combining the local binary pattern image, the Canny edge detection image and the denoised liver image to obtain a multi-channel liver image. Then, the multichannel liver image is passed through a dual-flow network structure including a first convolutional neural network and a second convolutional neural network to obtain a classification feature map, wherein the first convolutional neural network uses a first convolutional kernel with a first size, and the second convolutional neural network uses a second convolutional kernel with a second size, and the first size is different from the second size. And then, passing the classification characteristic diagram through a classifier to obtain a classification result, wherein the classification result is a liver cirrhosis degree grade label of the patient to be detected.
In the above intelligent evaluation system 100 for cirrhosis state based on images, the image acquisition module 110 is configured to acquire a liver image of a patient to be detected. The traditional analysis of the imaging examination of cirrhosis is performed by the attending physician, but the resources of clinicians are limited, and a patient often needs to wait a long time to get the opportunity of diagnosis and analysis. Correspondingly, in the technical scheme of the application, the liver image of the patient to be detected is first acquired.
In the above intelligent evaluation system 100 for cirrhosis state based on image, the image denoising module 120 is configured to pass the liver image of the patient to be detected through a denoising module based on an automatic encoder to obtain a denoised liver image. Considering that image noise is introduced during image acquisition by the acquisition device itself and by environmental interference, noise reduction processing is performed on the liver image of the patient to be detected before liver image feature extraction, so as to obtain the noise-reduced liver image.
In one example, in the above intelligent evaluation system 100 for cirrhosis state based on images, the image denoising module 120 includes: the convolutional coding unit is used for inputting the liver image of the patient to be detected into the encoder of the noise reduction module, wherein the encoder uses a convolutional layer to perform explicit spatial coding on the liver image of the patient to be detected so as to obtain image characteristics; and the deconvolution coding unit is used for inputting the image characteristics into a decoder of the noise reduction module, wherein the decoder uses a deconvolution layer to perform deconvolution processing on the image characteristics so as to obtain the noise-reduced liver image.
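A minimal sketch of such an automatic-encoder-style noise reduction module follows, assuming PyTorch; the class name, channel counts, layer depths and activation choices are illustrative assumptions, not the patent's specific configuration:

```python
import torch
import torch.nn as nn

class DenoisingAutoEncoder(nn.Module):
    """Sketch of the noise reduction module: a convolutional encoder performs
    explicit spatial encoding of the liver image, and a deconvolutional decoder
    reconstructs the noise-reduced image. All sizes here are assumed."""
    def __init__(self, channels: int = 3):
        super().__init__()
        self.encoder = nn.Sequential(              # convolutional layers
            nn.Conv2d(channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(              # deconvolution (transposed conv) layers
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, channels, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, noisy_image):
        features = self.encoder(noisy_image)       # image features
        return self.decoder(features)              # noise-reduced liver image

denoised = DenoisingAutoEncoder()(torch.rand(1, 3, 256, 256))
print(denoised.shape)  # torch.Size([1, 3, 256, 256])
```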
In the above intelligent evaluation system 100 for cirrhosis state based on an image, the texture feature extraction module 130 is configured to perform local binary patterning processing and Canny edge detection on the noise-reduced liver image to obtain a local binary pattern map and a Canny edge detection map. As described above, in the evaluation of the degree of cirrhosis, more attention needs to be paid to the texture features of the liver surface, and in order to enhance the perception and extraction capability of the feature extractor based on the deep neural network model for texture features, in the technical solution of the present application, local binary patterning processing and Canny edge detection are first performed on the noise-reduced liver image to obtain a local binary pattern map and a Canny edge detection map. The specific principle is as follows: a 3 × 3 window is taken as the basic unit; each pixel in the neighborhood is compared with the central pixel and marked 1 if its value is greater than that of the central pixel and 0 otherwise, so that the pixels in the neighborhood are binarized; the binary values are then multiplied by their corresponding weights in the binary sequence and summed to obtain the LBP value of the central pixel. Canny edge detection can preserve edge contours and remove other image content unrelated to texture.
In the above intelligent evaluation system 100 for cirrhosis state based on images, the multi-channel merging module 140 is configured to merge the local binary pattern map, the Canny edge detection map, and the noise-reduced liver image to obtain a multi-channel liver image. After the local binary pattern map and the Canny edge detection map are obtained, they are merged with the noise-reduced liver image into 5 channels to serve as the input of the network; this expands the data width of the network input, enables the network to learn and express richer information, and improves the accuracy.
In one example, in the above intelligent evaluation system 100 for cirrhosis status based on images, the multi-channel merging module 140 is further configured to merge the local binary pattern map, the Canny edge detection map, and the noise-reduced liver image along a channel dimension to obtain the multi-channel liver image according to the following formula; wherein the formula is
F_h = Concat[F_1, F_2, F_3]_c
where F_1 is the local binary pattern map, F_2 is the Canny edge detection map, F_3 is the noise-reduced liver image, and Concat[·]_c denotes the concatenation function along the channel dimension.
In the above-mentioned intelligent evaluation system 100 for cirrhosis state based on image, the dual-stream coding module 150 is configured to pass the multichannel liver image through a dual-stream network structure including a first convolutional neural network and a second convolutional neural network to obtain a classification feature map, wherein the first convolutional neural network uses a first convolutional kernel having a first size, and the second convolutional neural network uses a second convolutional kernel having a second size, and the first size is different from the second size. That is, the multi-channel liver image is passed through a dual-flow network structure having two image feature extractors, wherein the two image feature extractors can perform feature perception and filtering on the multi-channel liver image with different receptive fields to obtain a classification feature map.
Particularly, in the technical solution of the present application, on the one hand, when a feature extractor based on a convolutional neural network model extracts image features, the shallow features are mostly lines, textures, shapes and the like, while the deep features are more abstract features representing objects; on the other hand, texture features are present in different image regions. Therefore, in the technical solution of the present application, the first convolutional neural network and the second convolutional neural network use convolution kernels of different sizes so that they have different feature receptive fields, and feature maps (including deep feature maps and shallow feature maps) are extracted from different layers of the first convolutional neural network and the second convolutional neural network, with the feature maps of corresponding depths fused to obtain a plurality of fused feature maps as the classification feature map.
Fig. 3 illustrates a block diagram of a dual-stream encoding module in an image-based intelligent assessment system for cirrhosis state according to an embodiment of the present application. As shown in fig. 3, in the above intelligent evaluation system 100 for liver cirrhosis status based on images, the dual-stream encoding module 150 includes: a first deep convolutional coding unit 151, configured to perform convolutional coding on the multi-channel liver image with the first convolutional core using the first convolutional neural network to extract first feature maps from each layer of the first convolutional neural network respectively to obtain a plurality of first feature maps; a second deep convolutional encoding unit 152, configured to perform convolutional encoding on the multi-channel liver image with the second convolutional core using the second convolutional neural network to extract a second feature map from each layer of the second convolutional neural network respectively to obtain a plurality of second feature maps; and the dense connection unit 153 is configured to fuse the first feature maps and the second feature maps of the plurality of first feature maps and the plurality of second feature maps at the depths corresponding to each group, respectively, to obtain a plurality of fused feature maps.
In an example, in the above intelligent evaluation system 100 for liver cirrhosis status based on image, the first deep convolutional coding unit 151 is further configured to: performing convolution processing, pooling processing along a feature matrix, and activation processing on input data in forward pass of layers using layers of the first convolutional neural network to extract first feature maps from the layers of the first convolutional neural network respectively to obtain a plurality of first feature maps, wherein an input of the first layer of the first convolutional neural network is the multichannel liver image, and the first convolutional neural network uses a first convolution kernel having a first size.
In one example, in the above intelligent evaluation system 100 for cirrhosis state based on image, the second deep convolutional coding unit 152 is further configured to perform convolution processing, pooling processing along a feature matrix, and activation processing on input data in forward pass of layers using layers of the second convolutional neural network to extract second feature maps from the layers of the second convolutional neural network respectively to obtain a plurality of second feature maps, wherein an input of a first layer of the second convolutional neural network is the multi-channel liver image, and the second convolutional neural network uses a second convolution kernel having a second size, and the first size is different from the second size.
As shown in fig. 3, in the above intelligent evaluation system 100 for liver cirrhosis status based on images, the dual-stream encoding module 150 further includes: an optimization unit 154, configured to calculate channel compression-excitation optimization factors of depth recursion of each fused feature map in the plurality of fused feature maps, respectively, based on statistical features of the feature value sets of all positions of each fused feature map; and a fusion unit 155, configured to take the depth-recursive channel squeeze-excitation optimization factor of each fusion feature map as a weight, and respectively weight each fusion feature map in the multiple fusion feature maps to obtain multiple weighted fusion feature maps as the classification feature map.
In particular, in the technical solution of the present application, since each of the plurality of fused feature maps is obtained by fusing a first feature map and a second feature map of corresponding depth, it is desirable, when the plurality of fused feature maps are classified by the classifier, that the plurality of fused feature maps have high expression consistency in the depth dimension, so as to improve the classification accuracy. Thus, for each fused feature map, denoted for example as F, a depth-recursive channel squeeze-excitation optimization factor is calculated.
In an example, in the above intelligent evaluation system 100 for cirrhosis state based on images, the optimization unit 154 is further configured to: calculating the optimization factors corresponding to the feature values of all positions of all the fused feature maps in the plurality of fused feature maps according to the following formula based on the mean value and the variance of the feature value sets of all the positions of all the fused feature maps in the plurality of fused feature maps; wherein the formula is:
Figure BDA0003833556000000131
wherein v_i denotes the feature value of each position of each of the plurality of fused feature maps, μ and σ respectively denote the mean and variance of the feature value set of all positions of each of the plurality of fused feature maps, and exp(·) denotes an exponential operation on a feature value, which represents the calculation of a natural exponential function value raised to the power of that feature value.
The depth-recursive squeeze-excitation optimization factor activates depth recursion of feature distribution based on statistical characteristics of feature sets in the depth direction to infer the distribution of features at each sampling depth thereof, and obtains an attention-enhanced depth confidence value using a squeeze-excitation mechanism composed of a ReLU-Sigmoid function, whereby expression consistency of the plurality of fused feature maps in the depth dimension can be improved by weighting the plurality of fused feature maps with the same as a weighting coefficient. Thus, the accuracy of evaluation of the degree of cirrhosis is improved.
In the above intelligent evaluation system 100 for cirrhosis state based on images, the evaluation result generation module 160 is configured to pass the classification feature map through a classifier to obtain a classification result, where the classification result is a grade label of cirrhosis degree of a patient to be detected.
In an example, in the above intelligent evaluation system 100 for liver cirrhosis status based on images, the evaluation result generation module is further configured to: processing the classification feature map using the classifier to generate a classification result in accordance with the following formula:
O = softmax{(W_n, B_n) : … : (W_1, B_1) | Project(F)}
wherein Project(F) denotes projecting the classification feature map as a vector, W_1 to W_n are the weight matrices of the fully connected layers of each layer, and B_1 to B_n are the bias matrices of the fully connected layers of each layer.
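A hedged PyTorch sketch of such a classifier follows; Project(F) is interpreted here as flattening the classification feature map into a vector, and the class name, feature dimension, hidden width and number of cirrhosis grade labels are assumed for illustration only:

```python
import torch
import torch.nn as nn

class CirrhosisGradeClassifier(nn.Module):
    """Sketch of the classifier: Project(F) flattens the classification feature
    map to a vector, which is passed through stacked fully connected layers
    (weights W_i, biases B_i) and a softmax over the cirrhosis grade labels."""
    def __init__(self, feature_dim: int = 64 * 32 * 32, num_grades: int = 4):
        super().__init__()
        self.project = nn.Flatten()                  # Project(F): feature map -> vector
        self.fc = nn.Sequential(
            nn.Linear(feature_dim, 256), nn.ReLU(),  # (W_1, B_1)
            nn.Linear(256, num_grades),              # (W_n, B_n)
        )

    def forward(self, classification_feature_map):
        logits = self.fc(self.project(classification_feature_map))
        return torch.softmax(logits, dim=-1)         # probability per grade label

probs = CirrhosisGradeClassifier()(torch.rand(1, 64, 32, 32))
print(probs.sum(dim=-1))  # each row sums to 1
```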
In summary, the intelligent evaluation system 100 for cirrhosis state based on images according to the embodiment of the present application has been illustrated. It uses two image feature extractors with different receptive fields to perform feature perception and filtering on the liver image; in particular, the noise reduction module based on an automatic encoder filters out the noise of the liver image, and local binary patterning processing and Canny edge detection are used to enhance the perception and extraction capability of the feature extractor based on the deep neural network model for texture features, so as to improve the accuracy of liver cirrhosis degree evaluation.
As described above, the intelligent evaluation system 100 for liver cirrhosis state based on images according to the embodiment of the present application may be implemented in various terminal devices, such as a server providing image-based intelligent evaluation of liver cirrhosis state, and the like. In one example, the intelligent evaluation system 100 for liver cirrhosis state based on images according to the embodiment of the present application may be integrated into a terminal device as a software module and/or a hardware module. For example, the image-based liver cirrhosis state intelligent evaluation system 100 may be a software module in the operating system of the terminal device, or may be an application developed for the terminal device; of course, the intelligent evaluation system 100 for cirrhosis state based on images can also be one of many hardware modules of the terminal device.
Alternatively, in another example, the image-based intelligent evaluation system 100 for liver cirrhosis status and the terminal device may be separate devices, and the image-based intelligent evaluation system 100 for liver cirrhosis status may be connected to the terminal device through a wired and/or wireless network and transmit interactive information according to an agreed data format.
Exemplary method
According to another aspect of the application, an intelligent evaluation method for the cirrhosis state based on the image is further provided. As shown in fig. 4, the intelligent evaluation method for cirrhosis state based on image according to the embodiment of the present application includes the steps of: s110, acquiring a liver image of a patient to be detected; s120, the liver image of the patient to be detected passes through a noise reduction module based on an automatic encoder to obtain a noise-reduced liver image; s130, carrying out local binary patterning processing and Canny edge detection on the liver image subjected to noise reduction to obtain a local binary pattern image and a Canny edge detection image; s140, merging the local binary pattern image, the Canny edge detection image and the denoised liver image to obtain a multi-channel liver image; s150, enabling the multichannel liver image to pass through a double-current network structure comprising a first convolutional neural network and a second convolutional neural network to obtain a classification feature map, wherein the first convolutional neural network uses a first convolutional kernel with a first size, the second convolutional neural network uses a second convolutional kernel with a second size, and the first size is different from the second size; and S160, passing the classification characteristic diagram through a classifier to obtain a classification result, wherein the classification result is a liver cirrhosis degree grade label of the patient to be detected.
In an example, in the above intelligent evaluation method for cirrhosis state based on images, the passing the liver image of the patient to be detected through an automatic encoder-based noise reduction module to obtain a noise-reduced liver image includes: inputting the liver image of the patient to be detected into an encoder of the noise reduction module, wherein the encoder uses a convolutional layer to perform explicit spatial coding on the liver image of the patient to be detected so as to obtain image characteristics; and inputting the image features into a decoder of the noise reduction module, wherein the decoder performs deconvolution processing on the image features by using a deconvolution layer to obtain the noise-reduced liver image.
In one example, in the above intelligent evaluation method for cirrhosis state based on images, merging the local binary pattern map, the Canny edge detection map, and the noise-reduced liver image to obtain a multi-channel liver image further includes: merging the local binary pattern map, the Canny edge detection map and the denoised liver image along a channel dimension to obtain the multi-channel liver image according to the following formula; wherein the formula is F_h = Concat[F_1, F_2, F_3]_c, where F_1 is the local binary pattern map, F_2 is the Canny edge detection map, F_3 is the noise-reduced liver image, and Concat[·]_c denotes the concatenation function along the channel dimension.
In one example, in the above intelligent evaluation method for liver cirrhosis state based on image, the passing the multichannel liver image through a dual-flow network structure including a first convolutional neural network and a second convolutional neural network to obtain a classification feature map includes: performing convolutional encoding on the multichannel liver image by using the first convolutional neural network and the first convolutional core to respectively extract a first feature map from each layer of the first convolutional neural network to obtain a plurality of first feature maps; convolution coding the multi-channel liver image with the second convolution kernel using the second convolution neural network to extract second feature maps from respective layers of the second convolution neural network to obtain a plurality of second feature maps; and respectively fusing the first feature maps and the second feature maps of each group with corresponding depth in the plurality of first feature maps and the plurality of second feature maps to obtain a plurality of fused feature maps.
In one example, in the above intelligent evaluation method for cirrhosis state based on an image, the convolutional-coding the multichannel liver image with the first convolutional core using the first convolutional neural network to extract first feature maps from respective layers of the first convolutional neural network to obtain a plurality of first feature maps, further includes: performing convolution processing, pooling processing along a feature matrix, and activation processing on input data in forward pass of layers using the layers of the first convolutional neural network to extract first feature maps from the layers of the first convolutional neural network respectively to obtain a plurality of first feature maps, wherein an input of the first layer of the first convolutional neural network is the multi-channel liver image, and the first convolutional neural network uses a first convolution kernel having a first size.
In one example, in the above intelligent evaluation method for liver cirrhosis state based on image, the using the second convolutional neural network to convolutionally encode the multi-channel liver image by the second convolutional kernel to extract second feature maps from respective layers of the second convolutional neural network to obtain a plurality of second feature maps further includes performing convolution processing on input data in forward pass of layers using the layers of the second convolutional neural network, pooling processing along a feature matrix, and activation processing to extract second feature maps from the respective layers of the second convolutional neural network to obtain a plurality of second feature maps, wherein an input of a first layer of the second convolutional neural network is the multi-channel liver image, and the second convolutional neural network uses a second convolutional kernel having a second size, the first size being different from the second size.
In one example, in the above intelligent evaluation method for liver cirrhosis state based on images, passing the multi-channel liver image through a dual-stream network structure comprising a first convolutional neural network and a second convolutional neural network to obtain a classification feature map further includes: calculating, for each fused feature map in the plurality of fused feature maps, depth-recursive channel squeeze-excitation optimization factors based on the statistical features of the feature value set of all positions of that fused feature map; and weighting each fused feature map in the plurality of fused feature maps with its depth-recursive channel squeeze-excitation optimization factors as weights, so as to obtain a plurality of weighted fused feature maps as the classification feature map.
In one example, in the above intelligent evaluation method for liver cirrhosis state based on images, calculating the depth-recursive channel squeeze-excitation optimization factors of each fused feature map based on the statistical features of the feature value set of all positions of each fused feature map in the plurality of fused feature maps further includes: calculating, based on the mean and the variance of the feature value set of all positions of each fused feature map, the optimization factor corresponding to the feature value of each position of each fused feature map according to the following formula; wherein the formula is:
[optimization factor formula, provided as image BDA0003833556000000161 in the original publication]
where v_i denotes the feature value of each position of each fused feature map in the plurality of fused feature maps, μ and σ respectively denote the mean and the variance of the feature value set of all positions of each fused feature map, and exp(·) denotes the exponential operation on a feature value, that is, calculating the value of the natural exponential function with the feature value as the exponent.
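Because the optimization-factor formula itself is only reproduced as an image in the original publication, the sketch below mirrors only its described structure: for each fused feature map, compute the mean μ and variance σ of the feature values of all positions, derive a per-position factor from v_i, μ, and σ via an exponential operation, and weight the map by those factors. The concrete Gaussian-style form exp(-(v_i - μ)²/σ) used here is an explicit assumption, not the formula from the original filing.

```python
import torch

def depth_recursive_se_factors(fused: torch.Tensor) -> torch.Tensor:
    """Per-position optimization factors for one fused feature map.

    Uses the quantities named in the text (feature value v_i, mean mu,
    variance sigma, and exp(.)); the exact functional form is an assumption.
    """
    mu = fused.mean()
    sigma = fused.var()
    return torch.exp(-(fused - mu) ** 2 / (sigma + 1e-6))

def weight_fused_maps(fused_maps):
    """Weight each fused feature map by its own factors to obtain the classification feature map(s)."""
    return [fmap * depth_recursive_se_factors(fmap) for fmap in fused_maps]
```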
In one example, in the above intelligent evaluation method for liver cirrhosis state based on images, passing the classification feature map through a classifier to obtain a classification result, where the classification result is a liver cirrhosis degree grade label of the patient to be detected, further includes: processing the classification feature map using the classifier according to the following formula to generate the classification result:
O = softmax{(W_n, B_n) : … : (W_1, B_1) | Project(F)}
where Project(F) denotes projecting the classification feature map into a vector, W_1 to W_n are the weight matrices of the fully connected layers of each layer, and B_1 to B_n are the bias matrices of the fully connected layers of each layer.
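Read literally, this classification step projects the classification feature map into a vector, passes it through a stack of fully connected layers with weights W_1 … W_n and biases B_1 … B_n, and applies softmax to obtain the probability of each liver cirrhosis degree grade label. A minimal sketch follows, with the number of fully connected layers, the hidden width, and the input size assumed for illustration (in_features must equal the flattened size of the classification feature map):

```python
import torch
import torch.nn as nn

class CirrhosisGradeClassifier(nn.Module):
    """Project(F) -> stacked fully connected layers (W_i, B_i) -> softmax."""
    def __init__(self, in_features: int, num_grades: int, hidden: int = 128):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(in_features, hidden),   # (W_1, B_1)
            nn.ReLU(),
            nn.Linear(hidden, num_grades),    # (W_n, B_n)
        )

    def forward(self, classification_feature_map: torch.Tensor) -> torch.Tensor:
        # Project(F): flatten the classification feature map into a vector.
        v = torch.flatten(classification_feature_map, start_dim=1)
        logits = self.fc(v)
        # Softmax over the liver cirrhosis degree grade labels.
        return torch.softmax(logits, dim=1)
```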
In summary, the intelligent evaluation method for liver cirrhosis state based on images according to the embodiment of the present application has been illustrated. It uses two image feature extractors with different receptive fields to perform feature perception and filtering on the liver image; in particular, the noise reduction module based on an auto-encoder filters out the noise of the liver image, and local binary pattern processing and Canny edge detection are used to enhance the ability of the deep-neural-network-based feature extractors to perceive and extract texture features, thereby improving the accuracy of the liver cirrhosis degree evaluation.
Exemplary electronic device
Next, an electronic apparatus according to an embodiment of the present application is described with reference to fig. 5.
FIG. 5 illustrates a block diagram of an electronic device in accordance with an embodiment of the present application.
As shown in fig. 5, the electronic device 10 includes one or more processors 11 and memory 12.
The processor 11 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 10 to perform desired functions.
The memory 12 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM) and/or cache memory. The non-volatile memory may include, for example, Read-Only Memory (ROM), a hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 11 to implement the functions of the image-based intelligent evaluation method for liver cirrhosis state of the various embodiments of the present application described above and/or other desired functions. Various contents such as a liver image may also be stored in the computer-readable storage medium.
In one example, the electronic device 10 may further include: an input device 13 and an output device 14, which are interconnected by a bus system and/or other form of connection mechanism (not shown).
The input device 13 may include, for example, a keyboard, a mouse, and the like.
The output device 14 can output various information including the classification result to the outside. The output device 14 may include, for example, a display, speakers, a printer, a communication network and the remote output devices connected to it, and the like.
Of course, for simplicity, only some of the components of the electronic device 10 relevant to the present application are shown in fig. 5, and components such as buses, input/output interfaces, and the like are omitted. In addition, the electronic device 10 may include any other suitable components depending on the particular application.
Exemplary computer program product and computer-readable storage Medium
In addition to the above-described methods and apparatus, embodiments of the present application may also be a computer program product comprising computer program instructions that, when executed by a processor, cause the processor to perform the steps of the intelligent evaluation method for liver cirrhosis state based on images according to the various embodiments of the present application described in the "exemplary methods" section of this specification.
The computer program product may be written, for carrying out operations according to embodiments of the present application, in any combination of one or more programming languages, including object-oriented programming languages such as Java or C++ and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present application may also be a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, cause the processor to perform the steps of the intelligent evaluation method for liver cirrhosis state based on images according to the various embodiments of the present application described in the "exemplary methods" section of this specification.
The computer-readable storage medium may take any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing describes the general principles of the present application in conjunction with specific embodiments, however, it is noted that the advantages, effects, etc. mentioned in the present application are merely examples and are not limiting, and they should not be considered essential to the various embodiments of the present application. Furthermore, the foregoing disclosure of specific details is for the purpose of illustration and description and is not intended to be limiting, since the foregoing disclosure is not intended to be exhaustive or to limit the disclosure to the precise details disclosed.
The block diagrams of devices, apparatuses, and systems referred to in this application are given only as illustrative examples and are not intended to require or imply that connections, arrangements, and configurations must be made in the manner shown in the block diagrams. These devices, apparatuses, and systems may be connected, arranged, or configured in any manner, as will be appreciated by those skilled in the art. Words such as "including", "comprising", and "having" are open-ended words that mean "including, but not limited to" and may be used interchangeably therewith. The word "or" as used herein means, and is used interchangeably with, "and/or", unless the context clearly dictates otherwise. The phrase "such as" as used herein means, and is used interchangeably with, "such as but not limited to".
It should also be noted that in the devices, apparatuses, and methods of the present application, the components or steps may be decomposed and/or recombined. These decompositions and/or recombinations are to be considered as equivalents of the present application.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Thus, the present application is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, the description is not intended to limit embodiments of the application to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.

Claims (10)

1. An intelligent evaluation system for cirrhosis state based on images is characterized by comprising:
the image acquisition module is used for acquiring a liver image of a patient to be detected;
the image denoising module is used for enabling the liver image of the patient to be detected to pass through the denoising module based on the automatic encoder so as to obtain a denoised liver image;
the texture feature extraction module is used for carrying out local binary patterning processing and Canny edge detection on the liver image subjected to noise reduction so as to obtain a local binary pattern image and a Canny edge detection image;
a multi-channel merging module, configured to merge the local binary pattern map, the Canny edge detection map, and the noise-reduced liver image to obtain a multi-channel liver image;
a dual-stream encoding module for passing the multi-channel liver image through a dual-stream network structure comprising a first convolutional neural network and a second convolutional neural network to obtain a classification feature map, wherein the first convolutional neural network uses a first convolutional kernel having a first size, and the second convolutional neural network uses a second convolutional kernel having a second size, and the first size is different from the second size; and
the evaluation result generation module is used for passing the classification feature map through a classifier to obtain a classification result, wherein the classification result is a liver cirrhosis degree grade label of the patient to be detected.
2. The intelligent image-based cirrhosis state assessment system according to claim 1, wherein the image denoising module comprises:
the convolutional coding unit is used for inputting the liver image of the patient to be detected into the encoder of the noise reduction module, wherein the encoder uses a convolutional layer to perform explicit spatial coding on the liver image of the patient to be detected so as to obtain image characteristics; and
and the deconvolution coding unit is used for inputting the image characteristics into a decoder of the noise reduction module, wherein the decoder uses a deconvolution layer to perform deconvolution processing on the image characteristics so as to obtain the noise-reduced liver image.
3. The image-based intelligent assessment system for liver cirrhosis status according to claim 2, wherein the multi-channel merging module is further configured to merge the local binary pattern map, the Canny edge detection map and the noise-reduced liver image along a channel dimension to obtain the multi-channel liver image according to the following formula;
wherein the formula is F_h = Concat[F_1, F_2, F_3]_c, where F_1 is the local binary pattern map, F_2 is the Canny edge detection map, F_3 is the noise-reduced liver image, and Concat[·]_c denotes the concatenation function along the channel dimension.
4. The intelligent image-based cirrhosis status assessment system according to claim 3, wherein the dual-stream encoding module comprises:
a first depth convolutional coding unit, configured to perform convolutional coding on the multi-channel liver image by using the first convolutional neural network with the first convolution kernel, so as to extract a first feature map from each layer of the first convolutional neural network respectively to obtain a plurality of first feature maps;
a second depth convolution coding unit, configured to perform convolution coding on the multi-channel liver image by using the second convolution neural network and the second convolution kernel so as to extract a second feature map from each layer of the second convolution neural network respectively to obtain a plurality of second feature maps; and
and the dense connection unit is used for fusing the first feature maps and the second feature maps with corresponding depths in each group in the plurality of first feature maps and the plurality of second feature maps respectively to obtain a plurality of fused feature maps.
5. The intelligent image-based liver cirrhosis state assessment system according to claim 4, wherein the first deep convolutional encoding unit is further configured to:
performing convolution processing, pooling processing along a feature matrix, and activation processing on input data in forward pass of layers using the layers of the first convolutional neural network to extract first feature maps from the layers of the first convolutional neural network respectively to obtain a plurality of first feature maps, wherein an input of the first layer of the first convolutional neural network is the multi-channel liver image, and the first convolutional neural network uses a first convolution kernel having a first size.
6. The intelligent image-based liver cirrhosis state assessment system according to claim 5, wherein the second deep convolutional encoding unit is further configured to:
performing convolution processing, pooling processing along a feature matrix, and activation processing on input data in forward pass of layers using layers of the second convolutional neural network to extract second feature maps from the layers of the second convolutional neural network, respectively, to obtain a plurality of second feature maps, wherein an input of a first layer of the second convolutional neural network is the multi-channel liver image, and the second convolutional neural network uses a second convolution kernel having a second size, the first size being different from the second size.
7. The intelligent image-based cirrhosis status assessment system according to claim 6, wherein the dual-stream encoding module further comprises:
the optimization unit is used for calculating, for each fused feature map in the plurality of fused feature maps, depth-recursive channel squeeze-excitation optimization factors based on the statistical features of the feature value set of all positions of that fused feature map; and
the fusion unit is used for weighting each fused feature map in the plurality of fused feature maps with the depth-recursive channel squeeze-excitation optimization factors of that fused feature map as weights, so as to obtain a plurality of weighted fused feature maps as the classification feature map.
8. The intelligent image-based cirrhosis status assessment system according to claim 7, wherein the optimization unit is further configured to:
calculating the optimization factors corresponding to the feature values of all positions of all the fused feature maps in the plurality of fused feature maps according to the following formula based on the mean value and the variance of the feature value sets of all the positions of all the fused feature maps in the plurality of fused feature maps;
wherein the formula is:
[optimization factor formula, provided as image FDA0003833555990000031 in the original publication]
wherein v_i denotes the feature value of each position of each fused feature map in the plurality of fused feature maps, μ and σ respectively denote the mean and the variance of the feature value set of all positions of each fused feature map, and exp(·) denotes the exponential operation on a feature value, that is, calculating the value of the natural exponential function with the feature value as the exponent.
9. The intelligent image-based cirrhosis state assessment system of claim 8, wherein the assessment result generation module is further configured to:
processing the classification feature map using the classifier to generate a classification result according to the following formula:
O = softmax{(W_n, B_n) : … : (W_1, B_1) | Project(F)}
wherein Project(F) denotes projecting the classification feature map into a vector, W_1 to W_n are the weight matrices of the fully connected layers of each layer, and B_1 to B_n are the bias matrices of the fully connected layers of each layer.
10. An intelligent liver cirrhosis state assessment method based on images is characterized by comprising the following steps:
acquiring a liver image of a patient to be detected;
enabling the liver image of the patient to be detected to pass through a noise reduction module based on an automatic encoder to obtain a noise-reduced liver image;
carrying out local binary patterning processing and Canny edge detection on the liver image subjected to noise reduction to obtain a local binary pattern image and a Canny edge detection image;
merging the local binary pattern image, the Canny edge detection image and the denoised liver image to obtain a multi-channel liver image;
passing the multichannel liver image through a dual-flow network structure comprising a first convolutional neural network and a second convolutional neural network to obtain a classification feature map, wherein the first convolutional neural network uses a first convolutional kernel with a first size, and the second convolutional neural network uses a second convolutional kernel with a second size, and the first size is different from the second size; and
and passing the classification feature map through a classifier to obtain a classification result, wherein the classification result is a liver cirrhosis degree grade label of the patient to be detected.
CN202211081848.2A 2022-09-06 2022-09-06 Intelligent liver cirrhosis state evaluation system and method based on images Withdrawn CN115601299A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211081848.2A CN115601299A (en) 2022-09-06 2022-09-06 Intelligent liver cirrhosis state evaluation system and method based on images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211081848.2A CN115601299A (en) 2022-09-06 2022-09-06 Intelligent liver cirrhosis state evaluation system and method based on images

Publications (1)

Publication Number Publication Date
CN115601299A true CN115601299A (en) 2023-01-13

Family

ID=84843956

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211081848.2A Withdrawn CN115601299A (en) 2022-09-06 2022-09-06 Intelligent liver cirrhosis state evaluation system and method based on images

Country Status (1)

Country Link
CN (1) CN115601299A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116168348A (en) * 2023-04-21 2023-05-26 成都睿瞳科技有限责任公司 Security monitoring method, system and storage medium based on image processing
CN116168348B (en) * 2023-04-21 2024-01-30 成都睿瞳科技有限责任公司 Security monitoring method, system and storage medium based on image processing
CN116503672A (en) * 2023-06-26 2023-07-28 首都医科大学附属北京佑安医院 Liver tumor classification method, system and storage medium
CN116503672B (en) * 2023-06-26 2023-08-25 首都医科大学附属北京佑安医院 Liver tumor classification method, system and storage medium


Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20230113

WW01 Invention patent application withdrawn after publication