CN110610527B - SUV computing method, device, equipment, system and computer storage medium - Google Patents


Info

Publication number
CN110610527B
CN110610527B (application CN201910751358.0A)
Authority
CN
China
Prior art keywords
pet
image
noise
images
similarity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910751358.0A
Other languages
Chinese (zh)
Other versions
CN110610527A (en)
Inventor
张贺晔
张国庆
吕旭东
Current Assignee
Raycan Technology Co Ltd
Original Assignee
Raycan Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Raycan Technology Co Ltd
Priority to CN201910751358.0A
Publication of CN110610527A
Application granted
Publication of CN110610527B
Legal status: Active

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 — Machine learning
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 — 2D [Two Dimensional] image generation
    • G06T 11/003 — Reconstruction from projections, e.g. tomography

Abstract

Embodiments of the present application disclose a method, apparatus, device, system and computer storage medium for calculating an SUV. The method comprises the following steps: generating a plurality of PET noise images corresponding to an acquired PET original image based on a target detection model; determining the similarity between each PET noise image and the PET original image, and selecting from the plurality of PET noise images a PET noise image whose similarity meets a preset requirement as a specific PET noise image; and calculating the SUV from the selected specific PET noise image. The technical solution provided by the embodiments of the present application improves the accuracy of the obtained SUV, which in turn improves the accuracy of tumor diagnosis results and reduces the treatment cost for patients.

Description

SUV computing method, device, equipment, system and computer storage medium
Technical Field
The present application relates to the field of medical image data processing technology, and in particular, to a method, apparatus, device, system, and computer storage medium for calculating a Standard Uptake Value (SUV).
Background
The statements in this section merely provide background information related to the present disclosure and may not constitute prior art.
Positron emission tomography (PET) is a clinical imaging technique based on radioactive elements. Radiopharmaceuticals are labeled with positron-emitting nuclides of biologically essential elements, such as ¹⁸F, ¹¹C, ¹⁵O and ¹³N, and the physiological and biochemical changes of these substances over time after they enter the human body are observed non-invasively, quantitatively and dynamically from outside the body. The radiopharmaceuticals release signals in the human body that are received by an external PET device, which in turn forms PET images. These images can show the chemical activity of organs and tissues (e.g., tumors), indicating how normal the metabolism at a site is.
One of the clinical advantages of PET is that images can be analyzed quantitatively. A commonly used semi-quantitative indicator is the standardized uptake value (SUV), which describes the uptake of ¹⁸F-FDG in tumor tissue and normal tissue. A higher SUV indicates a greater likelihood of malignancy, which is of great importance in diagnosing various diseases, particularly benign and malignant nodular lesions of the lungs. How to obtain accurate SUVs efficiently is currently an important research topic.
In carrying out the present application, the inventors have found that the following problems exist in the prior art:
Although PET technology has many advantages in diagnosing disease, in clinical practice the quality of PET images has not matched that of computed tomography (CT) images or magnetic resonance imaging (MRI) images, which causes the calculated SUV to deviate significantly from the true SUV and lowers the accuracy of tumor diagnosis results.
Disclosure of Invention
It is an aim of embodiments of the present application to provide a method, apparatus, device, system and computer storage medium for calculating a Standard Uptake Value (SUV) to solve at least one problem in the prior art.
In order to solve the above technical problems, an embodiment of the present application provides a method for calculating an SUV, which may include the following steps:
generating a plurality of PET noise images corresponding to the acquired PET original images based on the target detection model;
determining the similarity between each PET noise image and the PET original image, and selecting a PET noise image with the similarity meeting preset requirements from a plurality of PET noise images as a specific PET noise image;
the SUV is calculated from the selected particular PET noise image.
Optionally, the target detection model is obtained by training a preset machine learning model with PET sample images, and the preset machine learning model comprises a single-shot multibox detection (SSD) model.
Optionally, before training the preset machine learning model with PET sample images to obtain the target detection model, the method further comprises:
and performing image enhancement processing on the PET sample image.
Optionally, the step of generating a plurality of said PET noise images comprises:
performing feature extraction on the PET original image through a plurality of different convolution layers in a basic convolution network of the target detection model to obtain a feature map;
carrying out convolution processing on the characteristic mapping image by utilizing a plurality of convolution layers added in the target detection model to obtain a plurality of PET reconstructed images;
and respectively carrying out differential processing on the plurality of PET reconstructed images and the PET original image to generate a plurality of PET noise images.
Optionally, the step of determining the similarity between each of the PET noise images and the PET raw image comprises:
calculating at least one of an SSIM value, an MSE value and a PSNR value between each of the PET noise images and the PET original image;
and determining the similarity between each PET noise image and the PET original image according to the calculation result.
Optionally, the specific PET noise image is a PET noise image with the lowest similarity with the PET original image in the plurality of PET noise images.
The embodiment of the application also provides a device for calculating SUV, which can comprise:
a generation unit configured to generate a plurality of PET noise images corresponding to the acquired PET original images based on the target detection model;
a determination and selection unit configured to determine a degree of similarity between each of the PET noise images and the PET original image, and select, as a specific PET noise image, a PET noise image whose degree of similarity with the PET original image satisfies a preset requirement from among a plurality of the PET noise images;
a processing unit configured to calculate the SUV from the selected specific PET noise image.
The embodiment of the application also provides computer equipment, which can comprise a memory and a processor, wherein the memory stores a computer program, and the computer program is executed by the processor to enable the processor to execute the steps in the method.
The embodiment of the application also provides an image processing system which comprises the computer equipment and the PET equipment, wherein the PET equipment is configured to obtain a PET original image by scanning a target patient.
The embodiment of the application also provides a computer storage medium, which can store a computer program, and the computer program can realize the functions corresponding to the method when being executed.
As can be seen from the technical solution provided by the above embodiment of the present application, in the embodiment of the present application, by generating a plurality of PET noise images corresponding to the acquired PET original image based on the target detection model, and selecting a PET noise image having a similarity with the PET original image that meets a preset requirement from the plurality of PET noise images, the quality of the selected PET noise image is higher, so that the SUV calculated by using the selected PET noise image is more accurate, and thus the accuracy of the tumor diagnosis result can be improved. Moreover, the SUV calculation method provided by the embodiment of the application is suitable for various tumor diagnosis researches, and has a wider application range. In addition, the method does not depend on any other anatomical images, and the calculation of SUV can be realized by using the PET image only, so that the treatment cost of a patient can be reduced.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments described in the present application, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is an application environment diagram of a method of computing an SUV in one embodiment;
FIG. 2 is a flow chart of a method of computing an SUV in one embodiment;
FIG. 3 is a schematic diagram of the structure of an SSD model;
FIG. 4 is a schematic structural view of an apparatus for computing an SUV in one embodiment;
FIG. 5 is a schematic diagram of a structure of a computer device in one embodiment;
FIG. 6 is a schematic diagram of a computer device in another embodiment;
fig. 7 is a schematic diagram of the structure of an image processing system in one embodiment.
Detailed Description
The technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings of the embodiments of the present application, and it is apparent that the described embodiments are only for explaining a part of the embodiments of the present application, not all the embodiments, and are not intended to limit the scope of the present application or the claims. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, shall fall within the scope of the application.
It will be understood that when an element is referred to as being "disposed on" another element, it can be directly on the other element or intervening elements may also be present. When an element is referred to as being "connected/coupled" to another element, it can be directly connected/coupled to the other element or intervening elements may also be present. The term "connected/coupled" as used herein may include electrically and/or mechanically physical connections/couplings. The term "comprising" as used herein refers to the presence of a feature, step or element, but does not exclude the presence or addition of one or more other features, steps or elements. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application.
In addition, in the description of the present application, the terms "first," "second," etc. are used merely for descriptive purposes and to distinguish between similar objects; there is no order of precedence between them, nor should they be construed as indicating or implying relative importance. Furthermore, in the description of the present application, unless otherwise indicated, "a plurality" means two or more.
FIG. 1 is an application environment diagram of a method of computing an SUV in one embodiment. Referring to fig. 1, the method may be applied to a computer device. The computer device includes a terminal 100 and a server 200 connected through a network. The method may be performed in the terminal 100 or the server 200, for example, the terminal 100 may directly acquire a PET original image from a PET device and perform the above method on the terminal side; alternatively, the terminal 100 may transmit the PET original image to the server 200 after acquiring the PET original image, so that the server 200 acquires the PET original image and performs the above-described method. The terminal 100 may be a desktop terminal (e.g., a desktop computer) or a mobile terminal (e.g., a notebook computer), in particular. The server 200 may be implemented as a stand-alone server or as a server cluster composed of a plurality of servers.
As shown in FIG. 2, in one embodiment, the present application provides a method of computing an SUV. The method specifically comprises the following steps:
s1: a plurality of PET noise images corresponding to the acquired PET raw images are generated based on the target detection model.
The PET raw image may refer to an image obtained by scanning a target patient injected with a tracer labeled with a positron nuclide using a PET apparatus. A PET noise image may refer to a PET image that contains noise (e.g., poisson noise), which may be due to the PET equipment itself, acquisition conditions, environmental factors, and the like.
The target detection model may be obtained by training a preset machine learning model with a large number of PET sample images. The preset machine learning model may include a convolutional neural network model based on candidate-region extraction, for example a Fast R-CNN or Faster R-CNN (Region-based Convolutional Neural Network) model, or an end-to-end convolutional neural network model, for example a YOLO (You Only Look Once) or SSD (Single Shot MultiBox Detector) model, but is not limited thereto. Because the SSD model can express the detection task as an end-to-end regression problem and has the advantages of fast processing speed, high detection accuracy and real-time processing capability, the SSD model is preferably adopted as the preset machine learning model in the embodiments of this specification.
As shown in fig. 3, the SSD model is a deep learning model for object detection. It is built on the VGG16 image classification network: the two fully connected layers of VGG16 (FC6 and FC7) are converted into convolution layers to obtain the basic convolution network of the SSD, and a multi-scale feature mapping layer consisting of 3 convolution layers and 1 average pooling layer is then appended after the basic convolution network. The sizes of the added 3 convolution layers and 1 average pooling layer decrease progressively to enable multi-scale prediction. Moreover, each added convolution layer can produce a series of predicted detection results by convolving a series of filters. The basic convolution network and the multi-scale feature mapping layer together form the SSD model.
After the preset machine learning model is determined, it is trained on acquired PET sample images, and its network parameters are adjusted during training until the output of the model converges; the machine learning model with the converged network parameters can then be determined as the target detection model. Specifically, a plurality of small image blocks may be extracted from a PET sample image containing noise and used as input to the preset machine learning model. Accordingly, a corresponding number of small image blocks are extracted at the same positions from the noise-free PET sample image, and these blocks are used as labels. The noisy and noise-free image blocks can then be used to train the preset machine learning model until the constructed loss function reaches its minimum, and the model corresponding to the network parameters at that minimum is determined as the target detection model. For the specific form and minimization of the loss function, reference may be made to the prior art; details are not repeated here.
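The patch-based preparation of training pairs described above can be sketched as follows. The helper name `extract_patches`, the 8×8 patch size and the stride are illustrative assumptions, not values specified by the patent.

```python
import numpy as np

def extract_patches(image, patch_size=8, stride=8):
    """Cut an image into small square blocks, scanning row by row."""
    patches = []
    h, w = image.shape
    for i in range(0, h - patch_size + 1, stride):
        for j in range(0, w - patch_size + 1, stride):
            patches.append(image[i:i + patch_size, j:j + patch_size])
    return np.stack(patches)

# Noisy patches are the model inputs; patches cut from the noise-free
# image at the SAME positions serve as the training labels.
rng = np.random.default_rng(0)
clean = rng.random((32, 32)).astype(np.float32)
noisy = clean + rng.poisson(5.0, clean.shape).astype(np.float32) / 5.0

inputs = extract_patches(noisy)   # model inputs (with noise)
labels = extract_patches(clean)   # matching noise-free labels
```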
In order to improve stability of the target detection model obtained by training and ensure reliability of a detection result, before training a preset machine learning model by using a PET sample image to obtain the target detection model, image enhancement processing may be performed on the PET sample image, for example, image clipping, flipping, rotation, scaling, adding random noise, image blurring, RGB color disturbance, and the like, and then the PET sample image after the image enhancement processing is input into the preset machine learning model to perform model training, so as to obtain the target detection model.
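A minimal sketch of the image-enhancement (augmentation) step, assuming NumPy and covering only a few of the operations listed above (flips, rotation, additive random noise); the function name and noise level are illustrative choices.

```python
import numpy as np

def augment(image, rng):
    """Return simple augmented variants of a PET sample image:
    horizontal/vertical flips, a 90-degree rotation, and added noise."""
    variants = [
        np.fliplr(image),                                          # horizontal flip
        np.flipud(image),                                          # vertical flip
        np.rot90(image),                                           # 90-degree rotation
        np.clip(image + rng.normal(0, 0.01, image.shape), 0, 1),   # random noise
    ]
    return variants

rng = np.random.default_rng(0)
img = rng.random((16, 16))
augmented = augment(img, rng)
```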
After the target detection model is obtained and a PET raw image of the target patient is acquired, the target detection model may be invoked to generate a plurality of PET noise images corresponding to the PET raw image. Specifically, after the PET raw image is input to the target detection model, feature extraction may be performed on it through a plurality of (e.g., 5) different convolution layers in the basic convolution network of the target detection model to obtain feature maps. These feature maps may then be separately convolved with a plurality of (e.g., 2) convolution layers (e.g., with 3×3 convolution kernels) added to the target detection model to obtain a plurality of PET reconstructed images. For example, sparse feature extraction may be performed on a low-resolution feature map with standard convolution, yielding a PET reconstructed image with low noise; dense feature extraction may be performed on a high-resolution feature map with atrous (dilated) convolution, yielding a PET reconstructed image with higher noise. The reconstructed images with lower and higher noise together constitute the plurality of PET reconstructed images. A PET reconstructed image refers to a PET image reconstructed from the extracted image features. For the specific processes of feature extraction and reconstruction, reference may be made to the prior art; details are not repeated here.
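The difference between standard and atrous (dilated) convolution mentioned above can be illustrated with a minimal single-channel NumPy implementation; this is a sketch of the dilation mechanics only, not the patent's actual network.

```python
import numpy as np

def dilated_conv2d(image, kernel, dilation=1):
    """Valid-mode 2-D convolution with an optionally dilated (atrous) kernel.
    dilation=1 is a standard convolution; dilation>1 samples the kernel
    taps with gaps, enlarging the receptive field at the same parameter cost."""
    kh, kw = kernel.shape
    eff_h = (kh - 1) * dilation + 1   # effective kernel extent with holes
    eff_w = (kw - 1) * dilation + 1
    h, w = image.shape
    out = np.zeros((h - eff_h + 1, w - eff_w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            region = image[i:i + eff_h:dilation, j:j + eff_w:dilation]
            out[i, j] = np.sum(region * kernel)
    return out
```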
After a plurality of PET reconstructed images are obtained, each PET reconstructed image may be separately subjected to a difference process with respect to the PET original image (i.e., the two images are subjected to a pixel value subtraction operation at corresponding positions), so that a plurality of PET noise images containing only noise or having a very high noise content may be obtained.
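The differential processing step amounts to an element-wise subtraction; a minimal NumPy sketch with made-up pixel values:

```python
import numpy as np

# Differential processing: subtract the PET original image from each
# reconstructed image, pixel by pixel, leaving (mostly) the noise.
original = np.array([[10.0, 12.0], [11.0, 9.0]])
reconstructions = [
    np.array([[10.5, 12.2], [10.8, 9.1]]),
    np.array([[11.0, 13.0], [12.0, 10.0]]),
]
noise_images = [recon - original for recon in reconstructions]
```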
S2: and determining the similarity between each PET noise image and the PET original image, and selecting a PET noise image with the similarity meeting the preset requirement from a plurality of PET noise images as a specific PET noise image.
Similarity is an index for measuring the similarity of two images, and can be generally measured in terms of structural similarity (Structural Similarity Index, SSIM), mean-Square Error (MSE), and/or Peak Signal-to-Noise Ratio (PSNR).
After generating a plurality of PET noise images corresponding to the PET original images, a degree of similarity between each PET noise image and the PET original image may be determined, and a specific PET noise image may be selected according to the determination result of the degree of similarity. Specifically, at least one of SSIM value, MSE value, and PSNR value between each PET noise image and PET original image may be calculated; then the similarity between each PET noise image and the PET original image can be determined according to the calculation result; and finally, selecting the PET noise image with the similarity meeting the preset requirement from the PET noise images as a specific PET noise image.
It should be noted that the specific PET noise image may be any PET noise image having a similarity with the PET original image satisfying a preset requirement from among the plurality of PET noise images, and preferably may be a PET noise image having a lowest similarity with the PET original image, that is, a PET image containing the least noise. Ideally, a particular PET noise image may refer to a PET image that does not contain noise.
In one embodiment of the present application, the SSIM value between a PET noise image and the PET original image can be calculated by the following formula (1):

SSIM(x, y) = [(2·μx·μy + c1)(2·σxy + c2)] / [(μx² + μy² + c1)(σx² + σy² + c2)]    (1)

where x and y denote the PET original image and the PET noise image, respectively; μx and μy are the means of x and y, reflecting the brightness information of the two images; σx² and σy² are the variances of x and y, reflecting their contrast information; σxy is the covariance of x and y, reflecting their structural information; and c1 = (0.01L)², c2 = (0.03L)², where L is the dynamic range of the image pixel values. The SSIM value typically lies in the range 0–1, and the larger the SSIM value, the higher the similarity of the two images. When the two images are identical, the SSIM value is 1.
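Formula (1) can be implemented directly. The sketch below computes a single global SSIM over whole images (the full SSIM index usually averages over sliding windows), with `L` defaulting to 1.0 for images normalized to [0, 1]:

```python
import numpy as np

def ssim(x, y, L=1.0):
    """Global SSIM per formula (1): a single-window simplification
    without the sliding-window averaging of the full SSIM index."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```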
After calculating the SSIM value between each PET noise image and the PET original image, all the obtained SSIM values may be compared, the similarity between each PET noise image and the PET original image may be determined from the comparison, and the corresponding PET noise image may then be selected according to the similarity. For example, the PET noise image whose similarity with the PET original image is the lowest among the plurality of PET noise images may be selected as the specific PET noise image.
In one embodiment of the application, the MSE value between each PET noise image and the PET raw image may be calculated by the following formula (2):

MSE = (1 / (m·n)) · Σ_{i=1..m} Σ_{j=1..n} [x(i, j) − y(i, j)]²    (2)

where m×n is the image size in pixels; x(i, j) and y(i, j) are the pixel values in row i and column j of the PET original image and the PET noise image, respectively; and m, n, i and j are positive integers. In general, the smaller the MSE value, the higher the similarity of the two images.
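Formula (2) is a direct average of squared pixel differences; a minimal NumPy sketch:

```python
import numpy as np

def mse(x, y):
    """Mean squared error per formula (2): the average of the squared
    per-pixel differences between the two images."""
    return np.mean((x.astype(np.float64) - y.astype(np.float64)) ** 2)
```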
After the MSE value between each PET noise image and the PET original image is calculated, all the obtained MSE values can be compared, the similarity between all the PET noise images and the PET original image is determined according to the comparison result, and after the similarity between all the PET noise images and the PET original image is determined, the corresponding PET noise image can be selected according to the determination result.
In one embodiment of the present application, the PSNR value between each PET noise image and the PET raw image may be calculated by the following formula (3):

PSNR = 10 · log₁₀(MAX² / MSE)    (3)

where MAX is the maximum possible pixel value of the image. In general, the larger the PSNR value, the higher the similarity of the two images. In addition, PSNR values are generally above 30 dB.
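Formula (3) can be computed from the MSE; the sketch below returns infinity for identical images, a common convention not stated in the text:

```python
import numpy as np

def psnr(x, y, max_val=255.0):
    """PSNR in dB per formula (3); identical images give infinity."""
    err = np.mean((x.astype(np.float64) - y.astype(np.float64)) ** 2)
    if err == 0:
        return np.inf
    return 10.0 * np.log10(max_val ** 2 / err)
```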
After calculating the PSNR value between each PET noise image and the PET original image, comparing all obtained PSNR values, determining the similarity between all PET noise images and the PET original image according to the comparison result, and selecting the corresponding PET noise image according to the determination result after determining the similarity between all PET noise images and the PET original image.
To improve the accuracy of determining the similarity between the PET noise images and the PET original image, only the SSIM values may be compared, only the MSE values or the PSNR values may be compared, or all three kinds of values may each be compared separately.
S3: SUVs are calculated from the selected specific PET noise images.
After a specific PET noise image is selected from the plurality of PET noise images, the SUV may be calculated from it. Specifically, the average pixel count at the lesion of the patient can be obtained from the specific PET noise image to derive the pixel count rate; the radioactivity concentration of the tracer at the lesion can then be calculated from the count rate (radioactivity concentration = pixel count rate × scaling factor); and finally the SUV at the lesion can be calculated from that radioactivity concentration, using the formula: SUV = radioactivity concentration of the tracer at the lesion / (injected dose / body weight).
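The SUV formula above can be sketched as follows; all numeric values are made-up illustrative inputs, and equating Bq/ml with Bq/g assumes a tissue density of 1 g/ml:

```python
def suv(lesion_concentration_bq_ml, injected_dose_bq, body_weight_g):
    """SUV = tissue activity concentration / (injected dose / body weight).
    Dimensionless when the concentration is in Bq/g and the dose-per-weight
    in Bq/g; here a 1 g/ml tissue density is assumed."""
    return lesion_concentration_bq_ml / (injected_dose_bq / body_weight_g)

# Example (illustrative numbers): 5 kBq/ml at the lesion,
# 370 MBq injected, 70 kg patient
value = suv(5000.0, 370e6, 70000.0)
```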
Because the noise contained in the selected PET noise image is low, the calculated SUV value can be more accurate, and the accuracy of the tumor diagnosis result can be improved.
As can be seen from the above description, the embodiment of the present application generates a plurality of PET noise images corresponding to the acquired PET original image based on the target detection model, selects a PET noise image having a low similarity to the PET original image from the plurality of PET noise images, and calculates the SUV using the selected PET noise image, which can improve the accuracy of the SUV calculation result, thereby improving the accuracy of the tumor diagnosis result. Moreover, the SUV calculation method provided by the embodiment of the application is suitable for various tumor diagnosis researches, and has a wider application range. In addition, the method does not depend on any other anatomical images, and the calculation of SUV can be realized by using the PET image only, so that the treatment cost of a patient can be reduced.
As shown in fig. 4, an embodiment of the present application further provides an apparatus for calculating an SUV, which may include:
a generation unit 410 that may be configured to generate a plurality of PET noise images corresponding to the acquired PET raw images based on the target detection model;
the determination selecting unit 420 may be configured to determine a similarity between each PET noise image and the PET original image, and select, as a specific PET noise image, a PET noise image whose similarity with the PET original image satisfies a preset requirement from among the plurality of PET noise images;
a processing unit 430, which may be configured to calculate the SUV from the selected specific PET noise image.
In one embodiment of the present application, the determining selection unit 420 may include (not shown in the figure):
a computing subunit configured to compute at least one of SSIM values, MSE values, and PSNR values between each PET noise image and a PET original image;
a determination subunit configured to determine a similarity between each PET noise image and the PET original image according to the calculation result;
and the selecting subunit is configured to select a PET noise image with the similarity meeting the preset requirement from the PET original images from the plurality of PET noise images according to the determination result of the similarity as a specific PET noise image.
For a specific description of the above units, reference may be made to the descriptions of steps S1-S3 in the above method embodiments, and a detailed description is omitted here.
The device generates a plurality of PET noise images corresponding to the acquired PET original images based on the target detection model by using the generation unit, the determination selection unit and the processing unit, and calculates the SUV by using the PET noise image with lower noise in the plurality of PET noise images, which can make the calculation result more accurate, thus improving the accuracy of the tumor diagnosis result. Moreover, the device provided by the embodiment of the application can be suitable for various tumor diagnosis researches, and has a wider application range. In addition, the device does not depend on any other anatomical images, and the calculation of SUV can be realized by using the PET image only, so that the treatment cost of a patient can be reduced.
FIG. 5 illustrates a schematic diagram of a computer device in one embodiment. The computer device may in particular be the terminal 100 in fig. 1. As shown in fig. 5, the computer device includes a processor, a memory, a network interface, an input device, and a display connected by a system bus. The memory includes a nonvolatile storage medium and an internal memory. The non-volatile storage medium of the computer device stores an operating system, and may also store a computer program that, when executed by a processor, may cause the processor to perform the method of calculating an SUV described in the above embodiments. The internal memory may also have stored therein a computer program which, when executed by a processor, performs the method of calculating an SUV described in the above embodiments.
Fig. 6 shows a schematic structural diagram of a computer device in another embodiment. The computer device may be in particular the server 200 in fig. 1. As shown in fig. 6, the computer device includes a processor, a memory, and a network interface connected by a system bus. The memory includes a nonvolatile storage medium and an internal memory. The non-volatile storage medium of the computer device stores an operating system, and may also store a computer program that, when executed by a processor, may cause the processor to perform the method of calculating an SUV described in the above embodiments. The internal memory may also have stored therein a computer program which, when executed by a processor, performs the method of calculating an SUV described in the above embodiments.
It will be appreciated by those skilled in the art that the structures shown in fig. 5 and 6 are merely block diagrams of portions of structures associated with aspects of the present application and are not intended to limit the computer devices to which aspects of the present application may be applied, and that a particular computer device may include more or fewer components than those shown, or may combine certain components, or may have different configurations of components.
In one embodiment, as shown in FIG. 7, the present application further provides an image processing system, which may include the computer device of FIG. 5 or FIG. 6 and a PET device connected thereto. The PET device may be used to obtain a PET raw image by scanning a target patient and to provide the obtained PET raw image to the computer device.
In one embodiment, the present application further provides a computer storage medium storing a computer program which, when executed, is capable of implementing the corresponding functions described in the above method embodiments. The computer program may also run on a computer device as shown in FIG. 5 or FIG. 6. The memory of the computer device contains the respective program modules constituting the above-described apparatus, and the computer program constituted by these program modules, when executed, is capable of realizing functions corresponding to the steps of the method of calculating an SUV according to the embodiments of the present application described in this specification.
Those skilled in the art will appreciate that all or part of the processes in the methods of the above embodiments may be implemented by a computer program instructing relevant hardware. The program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage media, databases, or other media used in the various embodiments provided herein may include non-volatile and/or volatile memory. The non-volatile memory can include Read-Only Memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus Direct RAM (RDRAM), Direct Rambus Dynamic RAM (DRDRAM), and Rambus Dynamic RAM (RDRAM), among others.
The apparatus, devices, units, etc. set forth in the above embodiments may be implemented by semiconductor chips, computer chips, and/or physical entities, or by products having certain functions. For convenience of description, the above devices are described as divided into various units by function. Of course, when implementing the present application, the functions of the units may be implemented in one and the same chip or in a plurality of chips.
Although the present application provides the method operation steps described in the above embodiments or flowcharts, the method may include more or fewer operation steps on the basis of conventional or non-inventive labor. For steps between which no necessary causal relationship logically exists, the execution order is not limited to the execution order provided by the embodiments of the present application.
In this specification, each embodiment is described in a progressive manner; for identical or similar parts between the embodiments, reference may be made to one another, and each embodiment mainly describes its differences from the other embodiments. In addition, the technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, the combination should be considered within the scope of this specification.
The embodiments described above are presented to facilitate understanding and use of the present application by those of ordinary skill in the art. It will be apparent to those skilled in the art that various modifications can be made to these embodiments, and that the general principles described herein may be applied to other embodiments without inventive effort. Therefore, the present application is not limited to the above embodiments; improvements and modifications made by those skilled in the art according to the present disclosure without departing from the scope of the present application shall all fall within the protection scope of the present application.

Claims (9)

1. A method of calculating a standard uptake value SUV, the method comprising:
generating a plurality of PET noise images corresponding to an acquired PET original image based on a target detection model, wherein the target detection model is obtained by training a preset machine learning model with PET sample images, and the preset machine learning model comprises a convolutional neural network model based on candidate region extraction or an end-to-end convolutional neural network model;
determining a similarity between each PET noise image and the PET original image, and selecting, from the plurality of PET noise images, a PET noise image whose similarity meets a preset requirement as a specific PET noise image, wherein the specific PET noise image is the PET noise image having the lowest similarity with the PET original image among the plurality of PET noise images;
calculating the SUV from the selected specific PET noise image.
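Outside the claim language, the selection and calculation steps of claim 1 might be sketched as follows. This is only an illustrative sketch: the similarity function, the activity and dose units, and the body-weight SUV formula (tissue activity concentration divided by injected dose per gram of body weight, assuming a tissue density of 1 g/mL) are assumptions, since the claim itself does not fix them.

```python
import numpy as np

def select_specific_noise_image(original, noise_images, similarity):
    """Pick the PET noise image least similar to the original,
    i.e. the one with the lowest similarity score (per claim 1)."""
    scores = [similarity(original, img) for img in noise_images]
    return noise_images[int(np.argmin(scores))]

def suv_body_weight(tissue_kbq_per_ml, injected_dose_kbq, body_weight_g):
    """Conventional body-weight SUV: tissue activity concentration
    divided by injected dose per gram of body weight (assumed formula)."""
    return tissue_kbq_per_ml / (injected_dose_kbq / body_weight_g)
```

Here `similarity` is any callable where a higher value means the two images are more alike (e.g. negative MSE); `argmin` then yields the least similar noise image.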
2. The method of claim 1, wherein the preset machine learning model comprises a single-shot multibox detection (SSD) model.
3. The method of claim 2, wherein before training the preset machine learning model with the PET sample images to obtain the target detection model, the method further comprises:
performing image enhancement processing on the PET sample images.
4. A method according to claim 2 or 3, wherein the step of generating a plurality of said PET noise images comprises:
performing feature extraction on the PET original image through a plurality of different convolution layers in a basic convolution network of the target detection model to obtain a feature map;
performing convolution processing on the feature map by using a plurality of convolution layers added in the target detection model to obtain a plurality of PET reconstructed images;
and performing difference processing on the plurality of PET reconstructed images and the PET original image, respectively, to generate the plurality of PET noise images.
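Setting the claim wording aside, the final step of claim 4 — differencing each reconstruction against the original to obtain one noise image per reconstruction — might look like the minimal numpy sketch below. The feature-extraction and reconstruction steps, which the claim assigns to the base network and the added convolution layers, are assumed to have already produced `reconstructions`.

```python
import numpy as np

def noise_images_from_reconstructions(original, reconstructions):
    """Difference each PET reconstructed image against the PET original
    image, yielding one noise image per reconstruction (claim 4, last step)."""
    orig = original.astype(np.float64)
    return [orig - r.astype(np.float64) for r in reconstructions]
```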
5. The method of claim 1, wherein determining the similarity between each of the PET noise images and the PET raw image comprises:
calculating at least one of an SSIM value, an MSE value and a PSNR value between each of the PET noise images and the PET original image;
and determining the similarity between each PET noise image and the PET original image according to the calculation result.
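As a non-authoritative illustration of claim 5, the MSE and PSNR values can be computed with plain numpy as below; SSIM is omitted here, since in practice a library implementation such as scikit-image's `structural_similarity` would typically be used. The `data_range` default is an assumption about the image value range.

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images; lower means more similar."""
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

def psnr(a, b, data_range=255.0):
    """Peak signal-to-noise ratio in dB; higher means more similar."""
    err = mse(a, b)
    if err == 0.0:
        return float("inf")
    return 10.0 * np.log10(data_range ** 2 / err)
```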
6. An apparatus for calculating a standard uptake value SUV, the apparatus comprising:
a generation unit configured to generate a plurality of PET noise images corresponding to an acquired PET original image based on a target detection model, the target detection model being obtained by training a preset machine learning model with PET sample images, wherein the preset machine learning model comprises a convolutional neural network model based on candidate region extraction or an end-to-end convolutional neural network model;
a determination and selection unit configured to determine a similarity between each PET noise image and the PET original image, and to select, from the plurality of PET noise images, a PET noise image whose similarity with the PET original image meets a preset requirement as a specific PET noise image, the specific PET noise image being the PET noise image having the lowest similarity with the PET original image among the plurality of PET noise images;
a processing unit configured to calculate the SUV from the selected specific PET noise image.
7. A computer device, characterized in that it comprises a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of the method according to any of claims 1 to 5.
8. An image processing system comprising the computer device of claim 7 and a PET device, wherein the PET device is configured to obtain a PET raw image by scanning a target patient.
9. A computer storage medium storing a computer program which, when executed, is capable of carrying out the functions corresponding to the method of any one of claims 1 to 5.
CN201910751358.0A 2019-08-15 2019-08-15 SUV computing method, device, equipment, system and computer storage medium Active CN110610527B (en)

Publications (2)

Publication Number Publication Date
CN110610527A CN110610527A (en) 2019-12-24
CN110610527B true CN110610527B (en) 2023-09-22

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111563896B (en) * 2020-07-20 2023-06-02 成都中轨轨道设备有限公司 Image processing method for detecting abnormality of overhead line system
CN112991276A (en) * 2021-02-25 2021-06-18 复旦大学附属中山医院 Contrast subtraction and quantitative analysis method based on different characteristics of multi-nuclide PET (positron emission tomography) image

Citations (10)

Publication number Priority date Publication date Assignee Title
WO2007026266A2 (en) * 2005-06-15 2007-03-08 Koninklijke Philips Electronics N.V. Noise model selection for emission tomography
CN101415122A (en) * 2007-10-15 2009-04-22 华为技术有限公司 Forecasting encoding/decoding method and apparatus between frames
CN105474166A (en) * 2013-03-15 2016-04-06 先进元素科技公司 Methods and systems for purposeful computing
WO2017085092A1 (en) * 2015-11-17 2017-05-26 Koninklijke Philips N.V. Data and scanner spec guided smart filtering for low dose and/or high resolution pet imaging
CN107403201A (en) * 2017-08-11 2017-11-28 强深智能医疗科技(昆山)有限公司 Tumour radiotherapy target area and jeopardize that organ is intelligent, automation delineation method
GB201718161D0 (en) * 2016-11-07 2017-12-20 Ford Global Tech Llc Constructing map data using laser scanned images
CN107844739A (en) * 2017-07-27 2018-03-27 电子科技大学 Robustness target tracking method based on adaptive rarefaction representation simultaneously
WO2018200493A1 (en) * 2017-04-25 2018-11-01 The Board Of Trustees Of The Leland Stanford Junior University Dose reduction for medical imaging using deep convolutional neural networks
CN108765294A (en) * 2018-06-11 2018-11-06 深圳市唯特视科技有限公司 A kind of image combining method generating confrontation network based on full convolutional network and condition
CN109285142A (en) * 2018-08-07 2019-01-29 广州智能装备研究院有限公司 A kind of head and neck neoplasm detection method, device and computer readable storage medium

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US10475214B2 (en) * 2017-04-05 2019-11-12 General Electric Company Tomographic reconstruction based on deep learning

Non-Patent Citations (2)

Title
Impact of PET reconstruction algorithm and threshold on dose painting of non-small cell lung cancer; Ingerid Skjei Knudtsen et al.; Radiotherapy and Oncology; 2014-12-31; pp. 210-214 *
Effects of different filtering methods on brain PET image quality and SUV values; Xu Lei et al.; China Medical Devices; 2019-04-10; pp. 87-90 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant