CN113780421B - Brain PET image identification method based on artificial intelligence - Google Patents

Brain PET image identification method based on artificial intelligence

Info

Publication number
CN113780421B
CN113780421B · CN202111065379.0A
Authority
CN
China
Prior art keywords
image
pet
brain
sequence
channel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111065379.0A
Other languages
Chinese (zh)
Other versions
CN113780421A (en)
Inventor
叶方全
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Tianpeng Computer Technology Co., Ltd.
Original Assignee
Guangzhou Tianpeng Computer Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Tianpeng Computer Technology Co., Ltd.
Priority to CN202111065379.0A
Publication of CN113780421A
Application granted
Publication of CN113780421B
Legal status: Active
Anticipated expiration


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/70ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for mining of medical data, e.g. analysing previous cases of other patients
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10104Positron emission tomography [PET]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20048Transform domain processing
    • G06T2207/20064Wavelet transform [DWT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20112Image segmentation details
    • G06T2207/20132Image cropping
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20224Image subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30016Brain
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30096Tumor; Lesion

Abstract

The invention provides a brain PET image identification method based on artificial intelligence, comprising the following steps: receiving a sequence of PET images of the brain of a target patient; analyzing the PET image sequence to identify a plurality of images; creating a multi-channel image of the PET image sequence; and computing, from the multi-channel image, a classification of a lesion of the PET image sequence. By automatically learning and extracting feature information, the method helps summarize the regularities of brain nodule imaging characteristics, achieves a higher detection rate, and obtains a more accurate three-dimensional model through segmentation, supporting physicians in identifying brain nodule lesions and making accurate diagnoses.

Description

Brain PET image identification method based on artificial intelligence
Technical Field
The invention relates to data mining, in particular to a brain PET image identification method based on artificial intelligence.
Background
Computer-aided diagnosis substantially improves diagnostic accuracy and working efficiency and reduces missed diagnoses. With the development of computer technology and artificial intelligence, computer-aided diagnosis is also becoming increasingly intelligent. The detection and identification of brain nodules are of great significance for the diagnosis of early brain tumors. "Brain nodule" is a general term for small lesions that appear as high-density shadows in some PET images, and their appearance in imaging is very complex. Various image algorithms have been applied to brain nodule detection and segmentation, such as thresholding, morphological algorithms, active contour methods, and nonlinear regression. In recent years, researchers have proposed deep learning models for brain nodule detection and segmentation that significantly outperform earlier methods, but they still face the following problems: two-dimensional networks cannot make good use of three-dimensional shape and texture information, and three-dimensional boundaries are difficult to segment accurately; brain region images and nodule features are highly complex, making it difficult to distinguish nodules from other, similar tissue.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides an artificial intelligence-based brain PET image identification method, which comprises the following steps:
receiving a sequence of PET images of a brain of a target patient;
identifying a plurality of images from the sequence of PET images, the plurality of images including a base image, a peak grayscale enhanced image, an initial captured image, and a delayed response image; the base image represents the PET image without grayscale enhancement, the peak grayscale enhanced image represents the image with the highest relative brightness value, the initial captured image represents the image in which grayscale enhancement is first detected in the sequence, and the delayed response image represents the end portion of the sequence, i.e., the last image past a predefined time;
creating a multi-channel image of the PET image sequence, wherein the multi-channel image comprises a luminance channel, a grayscale update channel, and a grayscale clearing channel; the luminance channel comprises the peak grayscale enhanced image, the grayscale update channel is the arithmetic difference between the peak grayscale enhanced image and the base image, and the grayscale clearing channel is the arithmetic difference between the initial captured image and the delayed response image;
wherein the analysis is performed by calculating a score image by assigning a score to each pixel according to a significant value above a threshold within a region around the pixel, and applying a non-maximum suppression to the score image to obtain a binary detection mask comprising candidate regions representing local maximum locations;
the method further comprises the following steps: cropping the candidate regions from the image and resizing each cropped candidate region according to an input of the depth RNN, wherein the depth RNN computes a classification representing a lesion of each candidate region;
performing wavelet transformation on the denoised PET brain image to obtain the high-frequency information in the PET image; dividing the PET brain image into a plurality of regions through lifting-tree decomposition and processing each local region separately; if each local region is Φ×Φ, the number of sampling angles is set to Φ²−1, i.e., projection angles θ_u = uπ/(Φ²−1), where u = 1, 2, …, Φ²−1;

constructing a Φ×Φ window of the same size as the sub-region, and calculating the orthogonal projection η_u(i) of the region at each sampling angle:

η_u(i) = −x(i)·sin(u) + y(i)·cos(u)

where u is the projection angle and x(i), y(i) are the window coordinates; projecting at each angle yields the bending coefficients η_d;

calculating a gradient vector field indicating the direction of change of the region at each point in the PET image; applying a wavelet transform to η_d to obtain the transform coefficients {ε_k}, predetermining a threshold T, and thresholding ε_k:

ε_k′(x) = 0, |x| ≤ T
ε_k′(x) = ε_k(x), |x| > T

after thresholding, an inverse wavelet transform yields the approximation signal R_d of η_d; over all projection angles u, the angle that minimizes the difference between η_d and R_d is taken as the optimal gradient vector field direction of the region:

u′ = argmin_u ||η_d − R_d||², ζ < H, u ∈ [0, Φ²];
ζ = min ||η_d − R_d||²

where H is a threshold that determines whether a gradient vector field exists in the region.
Preferably, the method further comprises:
a patch saliency histogram is computed for respective images of the sequence of PET images, and wherein the method further comprises creating a single patch saliency map by combining a plurality of the patch saliency histograms, wherein the patch saliency channel stores the single patch saliency map.
Preferably, the depth RNN outputs a binary detection map comprising candidate regions calculated using a classification representing a lesion of each candidate region.
Preferably, the method further comprises summing the values of the binary detection maps generated for each image of the sequence of PET images along the longitudinal axis to generate a projection heat map representing the spatial density of the candidate regions.
Preferably, each of a plurality of transverse saliency histograms is computed for a respective image of the sequence of PET images, wherein the method further comprises creating a single transverse saliency heat map by combining the plurality of transverse saliency histograms, wherein the transverse saliency channel stores the single transverse saliency heat map.
Preferably, the transverse saliency histogram is computed from a contralateral patch flow derived from a flow field between patches of the left and right brain, which identifies, for each patch of the brain, the corresponding most adjacent patch in the symmetric part of the brain, wherein the transverse saliency value of each patch in the transverse saliency histogram is estimated from the error of its most adjacent patch.
Compared with the prior art, the invention has the following advantages:
the invention provides an artificial intelligence-based brain PET image recognition method, which is beneficial to summarizing the rules of the characteristics of brain nodule imaging by automatically learning and extracting characteristic information, achieves higher detection rate, obtains a more accurate three-dimensional model by segmentation and is beneficial to brain nodule lesion recognition and accurate diagnosis of doctors.
Drawings
Fig. 1 is a flowchart of an artificial intelligence-based brain PET image recognition method according to an embodiment of the present invention.
Detailed Description
A detailed description of one or more embodiments of the invention is provided below along with accompanying figures that illustrate the principles of the invention. The invention is described in connection with such embodiments, but the invention is not limited to any embodiment. The scope of the invention is limited only by the claims and the invention encompasses numerous alternatives, modifications and equivalents. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. These details are provided for the purpose of example and the invention may be practiced according to the claims without some or all of these specific details.
The invention provides a brain PET image identification method based on artificial intelligence. Fig. 1 is a flowchart of a brain PET image recognition method based on artificial intelligence according to an embodiment of the invention.
The PET image sequence of the invention comprises at least a portion of the brain of the target patient. A depth RNN is trained on the sequences of PET images received for each patient, using the multi-channel images and their associated renderings and labels. Standard automatic machine learning methods that rely on large data sets cannot be used here, owing to the lack of available training data sets and because such methods are typically designed for classifying natural images rather than medical images. The invention provides accurate classification results even when a large training data set is not available: thanks to the small size of the training set and the multi-channel image data structure, the neural network is trained relatively quickly without sacrificing the accuracy of the lesion computation.
A multi-channel image is created for each sequence of PET images. The PET image sequence is preprocessed before the multi-channel image computed from it is fed to the trained recurrent neural network. Preprocessing includes segmenting the brain tissue from the sequence images; for example, the series of PET images is registered along the longitudinal axis so that the images are accurately superimposed. The luminance values of the PET images are regularized, with the standard deviation of the luminance values normalized to a unit value, which defines a relative measure between the images. The regularization is performed, for example, by calculating a total luminance value, an average luminance value, a regularized luminance value, or a relative luminance value for each image. The values of the regularized images are analyzed to identify, from the sequence, the images used to compute the channels of the multi-channel representation.
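As an illustration of this regularization step, the following Python sketch (using NumPy) normalizes a frame sequence to a unit standard deviation and derives a per-frame relative brightness; the function name, and the use of the per-frame mean as the relative measure, are assumptions rather than requirements of the invention.

import numpy as np

def regularize_sequence(frames):
    # Stack the sequence of 2D luminance arrays into shape (T, H, W).
    stack = np.asarray(frames, dtype=np.float64)
    # Normalize so the standard deviation of luminance values is one unit.
    std = stack.std()
    if std > 0:
        stack = stack / std
    # A relative measure between images: mean luminance per frame,
    # scaled by the sequence maximum.
    per_frame_mean = stack.mean(axis=(1, 2))
    relative = per_frame_mean / per_frame_mean.max()
    return stack, relative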
For PET brain tissue image segmentation, only the region neighboring the brain boundary needs to be analyzed. The specific steps are as follows:
A wavelet transform is applied to the denoised PET brain image to obtain the high-frequency information in the PET image. The PET brain image is divided into a plurality of regions by lifting-tree decomposition, and each local region is processed separately. Assuming each local region is of size Φ×Φ, the number of sampling angles is set to Φ²−1, i.e., projection angles θ_u = uπ/(Φ²−1), where u = 1, 2, …, Φ²−1.

A Φ×Φ window of the same size as the sub-region is constructed, and the orthogonal projection of the region at each sampling angle is calculated:

η_u(i) = −x(i)·sin(u) + y(i)·cos(u)

where u is the projection angle and x(i), y(i) are the window coordinates; projecting at each angle yields the bending coefficients η_d.

A gradient vector field is calculated that indicates the direction of change of the region at each point in the PET image. A wavelet transform is applied to η_d to obtain the transform coefficients {ε_k}; a threshold T is predetermined and ε_k is thresholded:

ε_k′(x) = 0, |x| ≤ T
ε_k′(x) = ε_k(x), |x| > T

After thresholding, an inverse wavelet transform yields the approximation signal R_d of η_d. Over all projection angles u, the angle that minimizes the difference between η_d and R_d is taken as the optimal gradient vector field direction for the region:

u′ = argmin_u ||η_d − R_d||², ζ < H, u ∈ [0, Φ²]
ζ = min ||η_d − R_d||²

H is a threshold that determines whether a gradient vector field exists in the region.
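The per-region angle search above can be sketched as follows, assuming the PyWavelets package (pywt) for the forward and inverse transforms; the aggregation of the projections η_u(i) into a one-dimensional profile η_d is one plausible reading of the text, and all parameter defaults are illustrative.

import numpy as np
import pywt  # PyWavelets, assumed available

def best_projection_angle(region, T=0.1, H=1.0, wavelet="db2"):
    # region: a Phi x Phi block of the denoised PET brain image.
    phi = region.shape[0]
    ys, xs = np.mgrid[0:phi, 0:phi]
    n_angles = phi * phi - 1
    best_u, best_zeta = None, np.inf
    for u in range(1, n_angles + 1):
        theta = u * np.pi / n_angles  # sampling angle
        # Orthogonal projection eta_u(i) = -x(i)*sin + y(i)*cos, weighted by
        # intensity and summed per row to form the bending coefficients eta_d.
        eta = (-xs * np.sin(theta) + ys * np.cos(theta)) * region
        eta_d = eta.sum(axis=1)
        # Hard-threshold the wavelet coefficients {eps_k} at T.
        coeffs = pywt.wavedec(eta_d, wavelet)
        coeffs = [np.where(np.abs(c) > T, c, 0.0) for c in coeffs]
        # The inverse transform gives the approximation signal R_d.
        r_d = pywt.waverec(coeffs, wavelet)[: eta_d.size]
        zeta = np.sum((eta_d - r_d) ** 2)
        if zeta < best_zeta:
            best_u, best_zeta = u, zeta
    # A gradient vector field is deemed present only if zeta < H.
    return (best_u, best_zeta) if best_zeta < H else (None, best_zeta)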
In order to reduce the computational complexity of the algorithm and the number of target regions, adjacent regions in the lifting tree with similar gradient vector field characteristics are merged to construct new PET brain segmentation target regions (see the sketch after this list):
1. Calculate the optimal gradient vector field direction u′ and the reconstruction error ζ for all blocks;
2. For a region Ω of width 2Φ, calculate the optimal gradient vector field direction u_d′ and the reconstruction error ζ′; let the four sub-regions Ω₁–Ω₄ of Ω have reconstruction errors ζ₁, ζ₂, ζ₃, ζ₄; if ζ′ = ζ₁ + ζ₂ + ζ₃ + ζ₄, merge Ω₁–Ω₄;
3. Repeat steps 1 and 2 until the maximum block size is reached.
The regions in which a gradient vector field is present are finally taken as the target regions of the PET brain image and processed further.
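A minimal recursive sketch of this merge procedure, reusing best_projection_angle from the previous sketch and assuming a square image whose side is a power of two times Φ:

import numpy as np

def merge_lifting_tree(image, phi=8, T=0.1, H=1.0, wavelet="db2"):
    # Returns a list of (y, x, size) blocks; a parent block of width 2*phi
    # is kept whenever its reconstruction error zeta' equals the sum
    # zeta1 + zeta2 + zeta3 + zeta4 of its four sub-blocks.
    def recurse(y, x, size):
        if size <= phi:
            return [(y, x, size)]
        half = size // 2
        corners = [(y, x), (y, x + half), (y + half, x), (y + half, x + half)]
        _, zeta_parent = best_projection_angle(
            image[y:y + size, x:x + size], T, H, wavelet)
        zeta_children = sum(
            best_projection_angle(image[cy:cy + half, cx:cx + half],
                                  T, H, wavelet)[1]
            for cy, cx in corners)
        if np.isclose(zeta_parent, zeta_children):  # merge condition
            return [(y, x, size)]
        return [r for cy, cx in corners for r in recurse(cy, cx, half)]
    return recurse(0, 0, min(image.shape))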
PET nodule image segmentation is accomplished using the following procedure: each target region is divided into two parts, a nodule part and a background part, the nodule part being a connected region after division;
The richness of the gray values is represented by an entropy value, defined as

U = −Σ_{i=1}^{W} P_i·log P_i

where W is the number of gray levels contained in the target region and P_i is the probability that a pixel in the PET sub-image has gray level i.
The mean gradient is calculated as

G_D = (1/N_D)·Σ_{(x,y)∈D} G(x,y)

where

G(x,y) = √(G_x(x,y)² + G_y(x,y)²)
G_x(x,y) = 2f(x+2,y) + f(x+1,y) − f(x−1,y) − 2f(x−2,y)
G_y(x,y) = 2f(x,y+2) + f(x,y+1) − f(x,y−1) − 2f(x,y−2)

D is the divided region, N_D is the number of pixels in the region used for the gradient calculation, and f is the pixel value at the corresponding position of the PET brain image.
The nodule region after PET image segmentation is calculated as:

D_F = argmin[ weight_1·U_{D1} + (1−weight_1)·U_{TF−D1} + weight_2·G_{D1} + (1−weight_2)·G_{TF−D1} ]

where TF denotes the whole brain image, and weight_1 and weight_2 are the weights of the entropy function U and the mean gradient G_D, respectively, in the segmentation algorithm. After the brain information of each region has been segmented, the segmentation results of all target regions are fused to complete the segmentation of the PET brain image.
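The entropy, the mean gradient, and the value minimized when selecting D_F can be sketched as follows; integer gray levels, wrap-around borders in the gradient operators, and equal default weights are simplifying assumptions.

import numpy as np

def entropy_U(region_pixels, levels=256):
    # Shannon entropy U over the W gray levels present in the region.
    hist, _ = np.histogram(region_pixels, bins=levels, range=(0, levels))
    p = hist[hist > 0] / region_pixels.size
    return -np.sum(p * np.log(p))

def mean_gradient_G(f, mask):
    # Mean gradient G_D over the masked region D, using the long-difference
    # operators G_x and G_y given above (borders wrap for brevity).
    gx = (2 * np.roll(f, -2, 1) + np.roll(f, -1, 1)
          - np.roll(f, 1, 1) - 2 * np.roll(f, 2, 1))
    gy = (2 * np.roll(f, -2, 0) + np.roll(f, -1, 0)
          - np.roll(f, 1, 0) - 2 * np.roll(f, 2, 0))
    g = np.sqrt(gx ** 2 + gy ** 2)
    return g[mask].mean()

def segmentation_objective(f, nodule_mask, w1=0.5, w2=0.5):
    # The quantity minimized when choosing D_F: weighted entropies and
    # mean gradients of the candidate nodule region and its complement.
    bg = ~nodule_mask
    return (w1 * entropy_U(f[nodule_mask]) + (1 - w1) * entropy_U(f[bg])
            + w2 * mean_gradient_G(f, nodule_mask)
            + (1 - w2) * mean_gradient_G(f, bg))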
The sequence of PET images is analyzed to identify a plurality of images including a base image, a peak grayscale enhanced image, an initial captured image, and a delayed response image. The base image is the PET image without contrast information; it may be identified, for example, as the first image of the sequence or as the image with the lowest relative luminance value. The peak grayscale enhanced image is the peak grayscale enhancement of the sequence, identified, for example, as the image with the highest relative luminance value, or as the peak of the generated luminance curve. The initial captured image is the image in which grayscale enhancement first becomes detectable in the sequence; it is identified as an image whose luminance exceeds that of the base image by a threshold that excludes luminance changes due to noise or artifacts. The delayed response image is the end portion of the sequence, e.g., the last image after a predefined time has passed.
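A sketch of how these four frames might be selected from the regularized sequence; the use of per-frame mean brightness and the noise threshold are illustrative assumptions.

import numpy as np

def identify_key_frames(stack, noise_thresh=0.05):
    # stack: regularized sequence of shape (T, H, W).
    brightness = stack.mean(axis=(1, 2))
    base_idx = int(np.argmin(brightness))   # lowest relative brightness
    peak_idx = int(np.argmax(brightness))   # highest relative brightness
    # First frame whose brightness exceeds the base by more than noise.
    above = np.where(brightness > brightness[base_idx] + noise_thresh)[0]
    initial_idx = int(above[0]) if above.size else base_idx
    delayed_idx = stack.shape[0] - 1        # last frame past the predefined time
    return base_idx, peak_idx, initial_idx, delayed_idx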
A trained Recurrent Neural Network (RNN) receives as input the multi-channel images and computes an output representing a classification of the lesion. For example, the output may include one of the following classifications: malignant lesions, benign lesions and normal tissue.
Optionally, the sequence of PET images is extracted from three-dimensional image data as two-dimensional slices, and the depth RNN analyzes each two-dimensional slice sequence; alternatively, the PET image sequence comprises 3D images. A patch saliency histogram may then also be computed for the plurality of images. The patch saliency histogram is computed from the LP distance between each patch of the image and the average patch, along the principal components of the image patches, and is represented as an additional channel of the multi-channel image. The patch saliency histogram is analyzed to identify candidate regions containing a relatively high density of saliency values. The candidate regions are cropped and fed to the depth RNN, which computes a lesion classification for each candidate region. The candidate regions may be cropped from the input of the luminance channel, the grayscale update channel, and the grayscale clearing channel to create a multi-channel representation of each candidate region. The depth RNN may output a binary detection map that includes candidate regions classified with a classification representing a lesion; candidate regions classified as lesion representations indicate the locations of the lesions. The binary detection maps computed from the patch saliency histograms are combined by an OR operation and summed together to generate a projection heat map representing the spatial density of the candidate regions, through which the location of the detected lesion is indicated.
The average patch may be computed from the LP distance and the principal components along the patches of the particular image.
For a vectorized patch p_{x,y} around a point (x, y), the distinctiveness of the patch is given by:

PD(p_{x,y}) = Σ_{k=1}^{n} |p_{x,y}·ω_k^T|

where:
PD denotes the distinctiveness of the patch and n is the number of components;
p_{x,y} denotes the vectorized patch around the point (x, y);
ω_k^T denotes the k-th principal component of the overall image patch distribution.
Optionally, a patch saliency histogram is computed for each image of the sequence. The patch saliency histograms are then summed to create a heat map representing the degree of saliency.
The patch saliency map may be fed as an input to the depth RNN as an additional channel of the multi-channel representation. The patch saliency heat map represents the location of a detected lesion; for example, the peak intensity point of the heat map associated with the multi-channel image of a lesion or tumor may indicate the location of the lesion or tumor.
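The patch distinctiveness PD can be sketched with an SVD over all vectorized patches; mean-centering the patches, the patch size, and the number of components are assumptions.

import numpy as np

def patch_distinctiveness(image, k=7, n_components=10):
    # Vectorize every k x k patch, center the distribution, and sum the
    # absolute projections onto the top principal components:
    # PD(p_xy) = sum_k |p_xy . w_k^T|.
    H, W = image.shape
    patches = np.lib.stride_tricks.sliding_window_view(image, (k, k))
    P = patches.reshape(-1, k * k).astype(np.float64)
    P = P - P.mean(axis=0)
    _, _, Vt = np.linalg.svd(P, full_matrices=False)  # principal components
    pd = np.abs(P @ Vt[:n_components].T).sum(axis=1)
    return pd.reshape(H - k + 1, W - k + 1)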
In another embodiment, a transverse saliency histogram is computed. For each patch of the brain, the transverse saliency refers to the corresponding most adjacent patch in the symmetric region of the brain, and the transverse saliency histogram stores the transverse saliency value of each patch. It is calculated from the LP distances between a patch of the brain in an image of the PET sequence and the corresponding patches of the symmetric part of the brain in the same image, or by computing a contralateral patch flow from a flow field between patches of the left and right brain. The flow field identifies, for each patch of the brain, the corresponding most adjacent patch in the symmetric portion of the brain.
The dense correspondence field for each pixel is computed by considering the k×k patch surrounding the respective pixel. Each pixel location (x, y) is assigned a random displacement vector T, which marks the position of the corresponding patch in the symmetric part of the brain, based on a calculated distance (e.g., the LP distance). The quality of a displacement vector T may be calculated according to the following relationship:

d(p_{x,y}, T) = Σ_{u,v=1}^{k} ( I(x+u, y+v) − I(x+T_x+u, y+T_y+v) )²

where:
k denotes the size of the patch around each pixel; p_{x,y} denotes the pixel at coordinate (x, y); T denotes the displacement vector; I denotes the patch intensity; d denotes the quality metric.
The displacement of a given patch of the brain is adjusted according to the displacement vectors of adjacent patches in the symmetric part of the brain; the adjustment is generated from the displacement vectors of adjacent patches in the same image. After the displacement adjustment, the steps of random displacement vector assignment and displacement adjustment are iterated several times, and the position of the best corresponding patch in the symmetric part of the brain is determined according to the LP distance.
The transverse saliency value of each patch in the transverse saliency histogram is estimated from the error of its most adjacent patch. The nearest-neighbor error (denoted NHE) may be calculated as:

NHE(p_{x,y}) = min_T d(p_{x,y}, p_{x+T_x, y+T_y})

where p_{x,y} denotes the pixel at coordinate (x, y), T denotes the displacement vector, d denotes the quality metric, and NHE denotes the nearest-neighbor error metric.
Optionally, a transverse saliency histogram is computed for each image of the sequence. The transverse saliency histograms may be summed to create a heat map whose values represent the degree of saliency. The transverse saliency heat map is fed as an input to the depth RNN as an additional channel of the multi-channel representation. It may represent the location of a detected lesion; for example, the peak intensity point of the heat map associated with the representation of a lesion or tumor may indicate the location of the lesion or tumor.
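A simplified, brute-force sketch of this contralateral search: a random search stands in for the propagation scheme described above, with the right hemisphere mirrored to align the symmetric part; all parameters are illustrative and the loop favors clarity over speed.

import numpy as np

def nearest_hemisphere_error(left, right, k=7, n_trials=64, seed=0):
    # NHE(p_xy) = min_T d(p_xy, p_(x+Tx, y+Ty)) over random displacements T.
    rng = np.random.default_rng(seed)
    mirrored = right[:, ::-1]               # align the symmetric part
    H, W = left.shape
    best_err = np.full((H - k, W - k), np.inf)
    for _ in range(n_trials):
        ty, tx = rng.integers(-k, k + 1, size=2)  # random displacement T
        for y in range(H - k):
            yy = y + ty
            if not 0 <= yy < H - k:
                continue
            for x in range(W - k):
                xx = x + tx
                if not 0 <= xx < W - k:
                    continue
                d = np.sum((left[y:y + k, x:x + k]
                            - mirrored[yy:yy + k, xx:xx + k]) ** 2)
                best_err[y, x] = min(best_err[y, x], d)
    return best_err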
A patch saliency histogram of the plurality of images, or a transverse saliency histogram of the plurality of images, is analyzed to identify candidate regions containing a relatively high density of saliency values. The candidate regions are bounded by bounding boxes, which ensure that the entire lesion is included in the cropped image. The extracted lesion image may be resized according to the input of the depth RNN. For a given range of window sizes (w_i, h_j) and a set of thresholds t_1, t_2, …, t_n, the w_i×h_j region s_{x,y} at and around each pixel (x, y) is evaluated with the following score:

count(x, y) = Σ_{l=1}^{n} |{ (x′, y′) ∈ s_{x,y} : S(x′, y′) > t_l }|

where count(x, y) is the score calculated for the pixel (x, y) and S denotes the saliency value. A score image is generated from the scores calculated for every pixel, and non-maximum suppression is applied to the score image to obtain the locations of the local maxima.
Optionally, the candidate regions are cropped from the image, and each cropped candidate region is resized according to the input of the depth RNN, which computes a classification representing the lesion of each candidate region. As described above, the cropped candidate regions may be fed into the respective channels of the multi-channel image as the patch saliency histogram or the transverse saliency histogram.
When the candidate region is cropped, each channel of the multi-channel image includes a region corresponding to the candidate region. The multi-channel image includes at least the following three channels:
A luminance channel comprising the peak grayscale enhanced image.
A grayscale update channel comprising the arithmetic difference between the peak grayscale enhanced image and the base image.
A grayscale clearing channel comprising the arithmetic difference between the initial captured image and the delayed response image.
The channels of the multi-channel image are computed from a series of axial PET images.
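Assembling the three mandatory channels is then a simple stacking operation over the frames selected earlier; the channel order is an assumption.

import numpy as np

def build_multichannel(base, peak, initial, delayed):
    luminance = peak                  # luminance channel: peak-enhanced image
    gray_update = peak - base         # enhancement relative to the base image
    gray_clear = initial - delayed    # difference between initial and delayed
    return np.stack([luminance, gray_update, gray_clear], axis=0)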
The multi-channel image may include the following channels:
A patch saliency channel comprising the patch saliency histogram, such as a patch saliency heat map, or a candidate region.
A transverse saliency channel comprising the transverse saliency histogram, a transverse saliency heat map, or a candidate region.
The multi-channel images are provided as input to the trained depth RNN, which computes the classifications representing lesions. Optionally, the trained depth RNN includes 9 convolutional layers in three consecutive blocks. The first block may include two 4×4×32 filters, a ReLU layer, and a max-pooling layer. The second block may include four 4×4×32 filters. The third block may include three convolutional layers of sizes 4×4×72, 6×6×72, and 3×3×72, each followed by a ReLU. The trained depth RNN may also include a fully connected layer with a number of neurons and a softmax loss layer.
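A loose PyTorch rendering of the described network: the filter sizes follow the text where given, the pooling head and neuron count are assumptions, and the softmax appears implicitly in the loss used during training.

import torch.nn as nn

class LesionClassifier(nn.Module):
    def __init__(self, in_channels=3, num_classes=3):
        super().__init__()
        # Block 1: two 4x4x32 filters, ReLU, max pooling.
        self.block1 = nn.Sequential(
            nn.Conv2d(in_channels, 32, 4, padding=2), nn.ReLU(),
            nn.Conv2d(32, 32, 4, padding=2), nn.ReLU(),
            nn.MaxPool2d(2))
        # Block 2: four 4x4x32 filters.
        self.block2 = nn.Sequential(*[
            m for _ in range(4)
            for m in (nn.Conv2d(32, 32, 4, padding=2), nn.ReLU())])
        # Block 3: 4x4x72, 6x6x72, 3x3x72, each followed by a ReLU.
        self.block3 = nn.Sequential(
            nn.Conv2d(32, 72, 4, padding=2), nn.ReLU(),
            nn.Conv2d(72, 72, 6, padding=3), nn.ReLU(),
            nn.Conv2d(72, 72, 3, padding=1), nn.ReLU())
        # Fully connected head; softmax is applied by the loss function.
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(72, num_classes))

    def forward(self, x):
        return self.head(self.block3(self.block2(self.block1(x))))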
The depth RNN outputs a binary detection map comprising the candidate regions, computed using a classification representing the lesion of each candidate region. The values of the binary detection maps generated for each image of the PET image sequence are summed along the vertical axis to generate a projection heat map representing the spatial density of the candidate regions.
The PET image sequence is acquired in the form of 3D image data, and a set of 2D image slices is extracted from each sequence. The sequence is preprocessed, the images defining the channels of the multi-channel image are identified, the patch saliency histogram or transverse saliency histogram is computed and analyzed to identify candidate regions, the multi-channel image is classified by the depth RNN, and an output is provided based on the classification result.
According to one embodiment of the invention, a method of training a depth RNN to detect a representation of a lesion from a multi-channel image computed from a sequence of PET images comprises:
Training images of patients are received. They may be stored, for example, in a PET image repository or on a medical record server. Each set of training images comprises a sequence of PET images.
The training images are pre-processed, for example by moving the region of interest, adding multiple rotations or multiple flip variables.
A subset of the images of each sequence that includes a lesion is manually delineated to define the boundary of the lesion. The training images may be stored in an electronic medical record together with the manual delineations. Optionally, images without lesions are manually or automatically annotated as normal tissue. Each annotated lesion is associated with an indicator, stored as a tag, metadata, or field value in the electronic medical record according to the drawn color or another representation. Images of symmetric brain areas without lesions or annotations may be associated with indicators such as normal patches.
A patch saliency histogram is computed for the plurality of images of each sequence, a transverse saliency histogram is computed for the plurality of images of each sequence, a multi-channel image is created for each sequence, and the depth RNN is trained on the multi-channel images and the associated labels using stochastic gradient descent.
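A minimal training sketch consistent with the description above, pairing the network with a softmax cross-entropy loss and stochastic gradient descent; the hyper-parameters and the loader of pre-augmented multi-channel crops are illustrative.

import torch
import torch.nn as nn

def train(model, loader, epochs=20, lr=1e-3):
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()   # softmax loss layer
    model.train()
    for _ in range(epochs):
        for images, labels in loader:  # augmented multi-channel crops + labels
            opt.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            opt.step()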
Finally, the depth RNN is provided for classification of the target sequence of the PET image.
In conclusion, the invention provides a brain PET image recognition method based on artificial intelligence that automatically learns and extracts feature information, which helps summarize the regularities of brain nodule imaging characteristics, achieves a higher detection rate, and obtains a more accurate three-dimensional model through segmentation, supporting physicians in identifying brain nodule lesions and making accurate diagnoses.
It will be apparent to those skilled in the art that the modules or steps of the present invention described above may be implemented in a general purpose computing system, centralized on a single computing system, or distributed across a network of computing systems, and optionally implemented in program code that is executable by the computing system, such that the program code is stored in a storage system and executed by the computing system. Thus, the present invention is not limited to any specific combination of hardware and software.
It is to be understood that the above-described embodiments of the present invention are merely illustrative of or explaining the principles of the invention and are not to be construed as limiting the invention. Therefore, any modification, equivalent replacement, improvement and the like made without departing from the spirit and scope of the present invention should be included in the protection scope of the present invention. Further, it is intended that the appended claims cover all such variations and modifications as fall within the scope and boundaries of the appended claims or the equivalents of such scope and boundaries.

Claims (1)

1. A brain PET image identification method based on artificial intelligence is characterized by comprising the following steps:
receiving a sequence of PET images of a brain of a target patient;
identifying a plurality of images from the sequence of PET images, the plurality of images including a base image, a peak grayscale enhanced image, an initial captured image, and a delayed response image; the base image represents the PET image without grayscale enhancement, the peak grayscale enhanced image represents the image with the highest relative brightness value, the initial captured image represents the image in which grayscale enhancement is first detected in the sequence, and the delayed response image represents the end portion of the sequence, i.e., the last image past a predefined time;
creating a multi-channel image of the PET image sequence, wherein the multi-channel image comprises a luminance channel, a grayscale update channel, and a grayscale clearing channel; the luminance channel comprises the peak grayscale enhanced image, the grayscale update channel is the arithmetic difference between the peak grayscale enhanced image and the base image, and the grayscale clearing channel is the arithmetic difference between the initial captured image and the delayed response image;
calculating a score image by assigning a score to each pixel according to significant values above a threshold within a region around the pixel, and applying non-maximum suppression to the score image to obtain a binary detection mask comprising candidate regions representing local maximum locations;
the method further comprises the following steps: cropping the candidate regions from the score image and resizing each cropped candidate region according to an input of a depth RNN that calculates a classification representing a lesion of each candidate region;
performing wavelet transformation on the denoised PET brain image to obtain the high-frequency information in the PET image; dividing the PET brain image into a plurality of regions through lifting-tree decomposition and processing each local region separately; if each local region is Φ×Φ, the number of sampling angles is Φ²−1, i.e., projection angles θ_u = uπ/(Φ²−1), where u = 1, 2, …, Φ²−1;

constructing a Φ×Φ window of the same size as the local region, and calculating the orthogonal projection η_u(i) of the local region at each sampling angle:

η_u(i) = −x(i)·sin(u) + y(i)·cos(u)

where u is the projection angle and x(i), y(i) are the window coordinates; projecting at each angle yields the bending coefficients η_d;

calculating a gradient vector field indicating the direction of change of the local region at each point in the PET image; applying a wavelet transform to η_d to obtain the transform coefficients {ε_k}, predetermining a threshold T, and thresholding ε_k:

ε_k′(x) = 0, |x| ≤ T
ε_k′(x) = ε_k(x), |x| > T

after thresholding, an inverse wavelet transform yields the approximation signal R_d of η_d; over all projection angles u, the angle that minimizes the difference between η_d and R_d is taken as the optimal gradient vector field direction of the local region:

u′ = argmin_u ||η_d − R_d||², ζ < H, u ∈ [0, Φ²];
ζ = min ||η_d − R_d||²

where H is a threshold that determines whether a gradient vector field exists in the local region.
CN202111065379.0A 2021-06-07 2021-06-07 Brain PET image identification method based on artificial intelligence Active CN113780421B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111065379.0A CN113780421B (en) 2021-06-07 2021-06-07 Brain PET image identification method based on artificial intelligence

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110630596.3A CN113077021A (en) 2021-06-07 2021-06-07 Machine learning-based electronic medical record multidimensional mining method
CN202111065379.0A CN113780421B (en) 2021-06-07 2021-06-07 Brain PET image identification method based on artificial intelligence

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202110630596.3A Division CN113077021A (en) 2021-06-07 2021-06-07 Machine learning-based electronic medical record multidimensional mining method

Publications (2)

Publication Number Publication Date
CN113780421A CN113780421A (en) 2021-12-10
CN113780421B true CN113780421B (en) 2022-06-07

Family

ID=76617154

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202111065379.0A Active CN113780421B (en) 2021-06-07 2021-06-07 Brain PET image identification method based on artificial intelligence
CN202110630596.3A Pending CN113077021A (en) 2021-06-07 2021-06-07 Machine learning-based electronic medical record multidimensional mining method

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202110630596.3A Pending CN113077021A (en) 2021-06-07 2021-06-07 Machine learning-based electronic medical record multidimensional mining method

Country Status (1)

Country Link
CN (2) CN113780421B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114334130B (en) * 2021-12-25 2023-08-22 浙江大学 Brain symmetry-based PET molecular image computer-aided diagnosis system


Family Cites Families (11)

Publication number Priority date Publication date Assignee Title
US7317821B2 (en) * 2004-11-22 2008-01-08 Carestream Health, Inc. Automatic abnormal tissue detection in MRI images
US20070279716A1 (en) * 2006-06-02 2007-12-06 Chunghwa Picture Tubes, Ltd Process method of image data for liquid crystal display
CN106778506A (en) * 2016-11-24 2017-05-31 重庆邮电大学 A kind of expression recognition method for merging depth image and multi-channel feature
WO2019010470A1 (en) * 2017-07-07 2019-01-10 University Of Louisville Research Foundation, Inc. Segmentation of medical images
WO2019041262A1 (en) * 2017-08-31 2019-03-07 Shenzhen United Imaging Healthcare Co., Ltd. System and method for image segmentation
CN108428225A (en) * 2018-01-30 2018-08-21 李家菊 Image department brain image fusion identification method based on multiple dimensioned multiple features
CN108234884B (en) * 2018-02-12 2019-12-10 西安电子科技大学 camera automatic focusing method based on visual saliency
CN111227821B (en) * 2018-11-28 2022-02-11 苏州润迈德医疗科技有限公司 Microcirculation resistance index calculation method based on myocardial blood flow and CT (computed tomography) images
CN111079596A (en) * 2019-12-05 2020-04-28 国家海洋环境监测中心 System and method for identifying typical marine artificial target of high-resolution remote sensing image
CN111489330B (en) * 2020-03-24 2021-06-22 中国科学院大学 Weak and small target detection method based on multi-source information fusion
CN112434172A (en) * 2020-10-29 2021-03-02 西安交通大学 Pathological image prognosis feature weight calculation method and system

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
CN107506761A (en) * 2017-08-30 2017-12-22 山东大学 Brain image dividing method and system based on notable inquiry learning convolutional neural networks
CN108364006A (en) * 2018-01-17 2018-08-03 超凡影像科技股份有限公司 Medical Images Classification device and its construction method based on multi-mode deep learning
CN110580693A (en) * 2018-06-07 2019-12-17 湖南爱威医疗科技有限公司 Image processing method, image processing device, computer equipment and storage medium
WO2020034469A1 (en) * 2018-08-13 2020-02-20 Beijing Ande Yizhi Technology Co., Ltd. Method and apparatus for classifying a brain anomaly based on a 3d mri image
CN109447963A (en) * 2018-10-22 2019-03-08 杭州依图医疗技术有限公司 A kind of method and device of brain phantom identification

Non-Patent Citations (2)

Title
Clinical Assessment of MR-Assisted PET Image Reconstruction Algorithms for Low-Dose Brain PET Imaging; Abolfazl Mehranian et al.; 2019 IEEE Nuclear Science Symposium and Medical Imaging Conference (NSS/MIC); 2020-04-09; pp. 1-3 *
Research on PET/CT Multi-modal Image Recognition Based on Convolutional Neural Networks (基于卷积神经网络的PET_CT多模态图像识别研究); Wang Yuanyuan et al.; Video Application and Engineering (视频应用与工程); 2017-03-08; Vol. 41, No. 3; pp. 88-94 *

Also Published As

Publication number Publication date
CN113780421A (en) 2021-12-10
CN113077021A (en) 2021-07-06

Similar Documents

Publication Publication Date Title
CN109461495B (en) Medical image recognition method, model training method and server
US11593943B2 (en) RECIST assessment of tumour progression
US9092691B1 (en) System for computing quantitative biomarkers of texture features in tomographic images
CN113034426B (en) Ultrasonic image focus description method, device, computer equipment and storage medium
JP4999163B2 (en) Image processing method, apparatus, and program
CN109635846B (en) Multi-type medical image judging method and system
US11562491B2 (en) Automatic pancreas CT segmentation method based on a saliency-aware densely connected dilated convolutional neural network
CN109124662B (en) Rib center line detection device and method
CN109753997B (en) Automatic accurate robust segmentation method for liver tumor in CT image
US7480401B2 (en) Method for local surface smoothing with application to chest wall nodule segmentation in lung CT data
CN110222661B (en) Feature extraction method for moving target identification and tracking
CN110766659A (en) Medical image recognition method, apparatus, device and medium
JP2006006359A (en) Image generator, image generator method, and its program
CN114092450A (en) Real-time image segmentation method, system and device based on gastroscopy video
US20230005140A1 (en) Automated detection of tumors based on image processing
Liu et al. Extracting lungs from CT images via deep convolutional neural network based segmentation and two-pass contour refinement
CN113780421B (en) Brain PET image identification method based on artificial intelligence
Lee et al. Hybrid airway segmentation using multi-scale tubular structure filters and texture analysis on 3D chest CT scans
Afshar et al. Lung tumor area recognition in CT images based on Gustafson-Kessel clustering
CN116229189B (en) Image processing method, device, equipment and storage medium based on fluorescence endoscope
Abdellatif et al. K2. Automatic pectoral muscle boundary detection in mammograms using eigenvectors segmentation
Khan et al. Segmentation of single and overlapping leaves by extracting appropriate contours
CN110599518B (en) Target tracking method based on visual saliency and super-pixel segmentation and condition number blocking
Liu et al. Automatic Lung Parenchyma Segmentation of CT Images Based on Matrix Grey Incidence.
CN117274216B (en) Ultrasonic carotid plaque detection method and system based on level set segmentation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Brain PET Image Recognition Method Based on Artificial Intelligence

Effective date of registration: 20230407

Granted publication date: 20220607

Pledgee: Bank of China Co., Ltd., Guangzhou Haizhu Branch

Pledgor: GUANGZHOU TIANPENG COMPUTER TECHNOLOGY CO.,LTD.

Registration number: Y2023980037535