CN112991228A - Method and system for correcting crosstalk - Google Patents

Method and system for correcting crosstalk

Info

Publication number
CN112991228A
Authority
CN
China
Prior art keywords
crosstalk
projection data
corrected
convolution kernel
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110414435.0A
Other languages
Chinese (zh)
Other versions
CN112991228B (en)
Inventor
刘炎炎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai United Imaging Healthcare Co Ltd
Original Assignee
Shanghai United Imaging Healthcare Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai United Imaging Healthcare Co Ltd filed Critical Shanghai United Imaging Healthcare Co Ltd
Priority to CN202110414435.0A priority Critical patent/CN112991228B/en
Publication of CN112991228A publication Critical patent/CN112991228A/en
Priority to PCT/CN2022/087408 priority patent/WO2022218438A1/en
Application granted granted Critical
Publication of CN112991228B publication Critical patent/CN112991228B/en
Priority to US18/488,012 priority patent/US20240070918A1/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G06T 5/80 — Physics; image data processing; image enhancement or restoration; geometric correction
    • G06N 3/04 — Physics; computing arrangements based on biological models; neural networks; architecture, e.g. interconnection topology
    • G06N 3/08 — Physics; computing arrangements based on biological models; neural networks; learning methods
    • G06T 11/008 — Physics; 2D image generation; reconstruction from projections, e.g. tomography; specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction
    • G06T 2207/10081 — Physics; indexing scheme for image analysis or enhancement; image acquisition modality; tomographic images; computed x-ray tomography [CT]


Abstract

Embodiments of the present specification provide a method and a system for correcting crosstalk. The method comprises: acquiring first projection data and second projection data, wherein the first projection data are projection data containing crosstalk of a device to be corrected, and the second projection data are projection data in which the crosstalk of the device to be corrected has been corrected; training an initial auxiliary model with the first projection data and the second projection data as a training sample pair to obtain an auxiliary model, wherein the initial auxiliary model comprises at least one convolutional layer; determining a target convolution kernel corresponding to the at least one convolutional layer based on the auxiliary model; and determining, based on the target convolution kernel, crosstalk information of the device to be corrected, the crosstalk information being used for correcting the crosstalk of the device to be corrected.

Description

Method and system for correcting crosstalk
Technical Field
The present application relates to the field of scanning devices and computer technologies, and in particular, to a method and a system for correcting crosstalk.
Background
In a radiation scanning device, such as an X-ray scanner, a CT (computed tomography) device, a PET-CT (positron emission tomography-CT) device, or a laser scanning device, there may be a certain amount of crosstalk between the pixels of the detector; that is, radiation photons that should be received by one detection unit may spread to adjacent detection units. Projection data acquired by a device with crosstalk are therefore biased, which reduces image contrast at tissue boundaries and may even produce artifacts, degrading imaging quality.
Therefore, a method and system for correcting crosstalk is needed.
Disclosure of Invention
One aspect of the present description provides a method of correcting crosstalk. The method comprises: acquiring first projection data and second projection data, wherein the first projection data are projection data containing crosstalk of a device to be corrected, and the second projection data are projection data generated after the crosstalk of the device to be corrected has been corrected; training an initial auxiliary model with the first projection data and the second projection data as a training sample pair to obtain an auxiliary model, wherein the initial auxiliary model comprises at least one convolutional layer; determining a target convolution kernel corresponding to the at least one convolutional layer based on the auxiliary model; and determining, based on the target convolution kernel, crosstalk information of the device to be corrected, the crosstalk information being used for correcting the crosstalk of the device to be corrected.
Another aspect of the present description provides a system for correcting crosstalk. The system comprises: an acquisition module configured to acquire first projection data and second projection data, wherein the first projection data are projection data containing crosstalk of a device to be corrected, and the second projection data are projection data generated after the crosstalk of the device to be corrected has been corrected; a model determination module configured to train an initial auxiliary model with the first projection data and the second projection data as a training sample pair to obtain an auxiliary model, the initial auxiliary model comprising at least one convolutional layer; a convolution kernel determination module configured to determine, based on the auxiliary model, a target convolution kernel corresponding to the at least one convolutional layer; and a crosstalk determination module configured to determine crosstalk information of the device to be corrected based on the target convolution kernel, the crosstalk information being used for correcting the crosstalk of the device to be corrected.
Another aspect of the specification provides an apparatus for correcting crosstalk, including a processor configured to perform a method of correcting crosstalk.
Another aspect of the present specification provides a computer-readable storage medium storing computer instructions which, when read by a computer, cause the computer to perform a method of correcting crosstalk.
Drawings
The present description is further explained by way of exemplary embodiments, which are described in detail with reference to the accompanying drawings. These embodiments are not intended to be limiting, and in these embodiments like numerals indicate like structures, wherein:
FIG. 1 is a schematic diagram of an application scenario of a system for correcting crosstalk according to some embodiments of the present description;
FIG. 2 is a block diagram of an exemplary system for correcting crosstalk shown in accordance with some embodiments of the present description;
FIG. 3 is an exemplary flow chart of a method of correcting crosstalk according to some embodiments of the present description;
FIG. 4 is an exemplary flow diagram of a method for determining crosstalk information for a device to be corrected based on a target convolution kernel in accordance with some embodiments of the present description;
FIG. 5 is a schematic diagram of a detection cell pixel matrix and corresponding target convolution kernel, according to some embodiments of the present description;
FIG. 6 is a schematic diagram of a structure of an auxiliary model according to some embodiments of the present description;
FIG. 7 is a schematic illustration of images before and after correction of crosstalk for a device according to some embodiments of the present description.
Detailed Description
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings used in the description of the embodiments will be briefly described below. It is obvious that the drawings in the following description are only examples or embodiments of the present description, and that for a person skilled in the art, the present description can also be applied to other similar scenarios on the basis of these drawings without inventive effort. Unless otherwise apparent from the context, or otherwise indicated, like reference numbers in the figures refer to the same structure or operation.
It should be understood that "system", "device", "unit" and/or "module" as used in this specification is a method for distinguishing different components, elements, parts or assemblies at different levels. However, other words may be substituted by other expressions if they accomplish the same purpose.
As used in this specification and the appended claims, the singular forms "a," "an," and "the" may include plural referents unless the context clearly dictates otherwise. In general, the terms "comprise" and "include" merely indicate that the explicitly identified steps and elements are included; the steps and elements do not form an exclusive list, and a method or apparatus may also include other steps or elements.
Flow charts are used in this description to illustrate operations performed by a system according to embodiments of the present description. It should be understood that the operations are not necessarily performed in the exact order shown. Rather, various steps may be processed in reverse order or concurrently. Meanwhile, other operations may be added to these processes, or one or more steps may be removed from them.
Fig. 1 is a schematic diagram of an application scenario of a system for correcting crosstalk according to some embodiments of the present description.
The crosstalk correction system can be used for crosstalk correction of various radiation scanning devices, such as CT equipment, PET-CT equipment, and the like.
As shown in fig. 1, an application scenario 100 of the crosstalk correction system may include a first computing system 130 and a second computing system 120. The first computing system 130 and the second computing system 120 may be the same or different. The first computing system 130 and the second computing system 120 refer to systems with computing capability, and may include various computers, such as a server and a personal computer, or may be computing platforms formed by connecting a plurality of computers in various structures.
The first computing system 130 and the second computing system 120 may include processors that can execute program instructions. The processors may include general-purpose central processing units (CPUs), graphics processing units (GPUs), microprocessors (MPUs), application-specific integrated circuits (ASICs), or other types of integrated circuits.
The first computing system 130 and the second computing system 120 may include storage media that may store instructions and may also store data. The storage medium may include mass storage, removable storage, volatile read-write memory, read-only memory (ROM), and the like, or any combination thereof.
The first computing system 130 and the second computing system 120 may also include a network for internal connection and connection with the outside, and may also include terminals for input or output. The network may be any one or more of a wired network or a wireless network. The terminal may include various devices having information receiving and/or transmitting functions, such as a computer, a mobile terminal (e.g., a mobile phone), a text scanning device, a display device, a printer, a wearable device (e.g., smart glasses, smart headphones), and the like.
The second computing system 120 may obtain the training samples 110. The training samples 110 may include first projection data and second projection data. The first projection data may contain crosstalk of the device to be corrected, and may be obtained by scanning a reference object with the device to be corrected. The second projection data may be projection data generated after the crosstalk of the device to be corrected has been corrected, and may be obtained by scanning the reference object with a standard device. A standard device is a device whose crosstalk has been corrected. A reference object is an object used as a reference sample, and may be a phantom, an animal, a human body, or the like. The training samples 110 may be input to the second computing system 120 in a variety of common ways (e.g., input via an input device, or transmission from a storage device via a network).
The second computing system 120 may obtain an initial auxiliary model. In some embodiments, the initial auxiliary model may be a machine learning model, such as a neural network model. The second projection data may be used as the gold standard for the first projection data to train the initial auxiliary model; after training, the auxiliary model is obtained. The second computing system 120 may determine a target convolution kernel based on the auxiliary model and determine the crosstalk information 125 of the device to be corrected based on the target convolution kernel. For a detailed description of this process, reference may be made to the descriptions of fig. 3 and fig. 4, which are not repeated here.
The first computing system 130 may obtain the projection data 140 of the device to be corrected and the crosstalk information 125 of the device to be corrected. The projection data 140 and the crosstalk information 125 may be input to the first computing system 130 in various common ways (e.g., input via an input device, or transmission from a storage device via a network).
By correcting the projection data 140 of the device to be corrected based on the crosstalk information 125, the first computing system 130 can obtain the crosstalk-corrected projection data 150. In this way, the crosstalk coefficients can be obtained from a target convolution kernel learned from a small amount of training sample data, and crosstalk correction of the device is achieved based on the crosstalk information determined from the target convolution kernel. Because no large set of training samples is required, the approach is highly practical and makes device crosstalk more convenient to correct.
Fig. 2 is a block diagram of an exemplary system for correcting crosstalk, shown in accordance with some embodiments of the present description.
As shown in fig. 2, in some embodiments, the system 200 for correcting crosstalk may include an acquisition module 210, a model determination module 220, a convolution kernel determination module 230, a crosstalk determination module 240, and a model correction module 250.
The obtaining module 210 may be configured to acquire first projection data and second projection data, wherein the first projection data are projection data containing crosstalk of a device to be corrected, and the second projection data are projection data generated after the crosstalk of the device to be corrected has been corrected. For a detailed description of the obtaining module 210, reference may be made to step 310, which is not repeated here.
The model determination module 220 may be configured to train an initial auxiliary model with the first projection data and the second projection data as a training sample pair to obtain the auxiliary model, wherein the initial auxiliary model comprises at least one convolutional layer.
In some embodiments, the model determination module 220 may be further configured to iteratively update the initial auxiliary model according to the training sample pair and a loss function to obtain the auxiliary model. The loss function includes a first loss function determined according to the difference between a preset value and the sum of the elements of an intermediate convolution kernel, the intermediate convolution kernel being determined based on the parameters of the initial auxiliary model or of the updated model. For a detailed description of the model determination module 220, reference may be made to step 320, which is not repeated here.
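A rough sketch of how such a combined loss might be assembled. The description only speaks of "the difference" of the kernel-element sum from a preset value; squaring that difference, taking the preset value to be 1 (so that the kernel conserves total signal), and the balancing weight of 0.1 are all assumptions made here for illustration.

```python
import numpy as np

def total_loss(pred, target, kernels, preset=1.0, weight=0.1):
    """Data term (MSE against the gold-standard second projection data) plus
    the first loss function: a penalty on the deviation of each intermediate
    convolution kernel's element sum from a preset value.
    Squaring, preset=1, and weight=0.1 are assumptions, not fixed by the text."""
    mse = np.mean((pred - target) ** 2)
    penalty = sum((np.sum(k) - preset) ** 2 for k in kernels)
    return mse + weight * penalty
```

With a perfect prediction and an identity kernel (element sum exactly 1), both terms vanish; a kernel whose elements sum to 0 would contribute `weight * 1` to the loss.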
The convolution kernel determination module 230 may be configured to determine a target convolution kernel corresponding to the at least one convolutional layer based on the auxiliary model.
In some embodiments, the convolution kernel determination module 230 may be further configured to: extract at least one trained convolution kernel corresponding to the at least one convolutional layer of the auxiliary model; and perform a convolution operation on the at least one trained convolution kernel to obtain the target convolution kernel.
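Stacked convolutional layers compose linearly, so the per-layer trained kernels can be combined into a single effective kernel by a full convolution of the kernels themselves. A minimal numpy sketch; the kernel values are illustrative, not taken from this description:

```python
import numpy as np

def conv_full(k1, k2):
    """Full 2-D convolution of two kernels: applying correlation-style
    convolutional layers with kernels k1 then k2 is equivalent to a single
    layer whose kernel is conv_full(k1, k2)."""
    h = k1.shape[0] + k2.shape[0] - 1
    w = k1.shape[1] + k2.shape[1] - 1
    out = np.zeros((h, w))
    for i in range(k1.shape[0]):
        for j in range(k1.shape[1]):
            out[i:i + k2.shape[0], j:j + k2.shape[1]] += k1[i, j] * k2
    return out

# Two trained 3x3 kernels (illustrative values)
k1 = np.array([[0.0, 0.1, 0.0], [0.1, 0.6, 0.1], [0.0, 0.1, 0.0]])
k2 = np.array([[0.0, 0.05, 0.0], [0.05, 0.8, 0.05], [0.0, 0.05, 0.0]])
target_kernel = conv_full(k1, k2)   # 5x5 effective (target) kernel
```

Note that the element sum of the composed kernel is the product of the element sums of the individual kernels, so kernels that each conserve signal compose into a kernel that also conserves signal.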
In some embodiments, the convolution kernel determination module 230 may be further configured to: determine an input matrix, the size of which is determined based on the size of the convolution kernel of the at least one convolutional layer; and input the input matrix into the auxiliary model, extracting through it the target convolution kernel corresponding to the at least one convolutional layer. For a detailed description of the convolution kernel determination module 230, reference may be made to step 330, which is not repeated here.
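One way the input-matrix approach can work (an assumed reading, not the patent's exact procedure): feed a unit impulse through the convolutional layers, and the response is the effective kernel. For two 3x3 layers the effective kernel spans 5x5, which fixes the input matrix size. The symmetric kernel values below are illustrative:

```python
import numpy as np

def corr_same(x, k):
    """Cross-correlation with zero padding ('same' size), as in a CNN conv layer."""
    p = k.shape[0] // 2
    xp = np.pad(x, p)
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + k.shape[0], j:j + k.shape[1]] * k)
    return out

# Two trained 3x3 kernels (illustrative, symmetric values)
k1 = np.array([[0.0, 0.1, 0.0], [0.1, 0.6, 0.1], [0.0, 0.1, 0.0]])
k2 = np.array([[0.0, 0.05, 0.0], [0.05, 0.8, 0.05], [0.0, 0.05, 0.0]])

# The effective kernel of two 3x3 layers spans 5x5, so the input matrix is a
# 5x5 unit impulse; the layers' response to it is the target convolution kernel.
impulse = np.zeros((5, 5))
impulse[2, 2] = 1.0
target_kernel = corr_same(corr_same(impulse, k1), k2)
```

Because the impulse support exactly fits the 5x5 effective kernel, nothing is clipped by the zero padding, and the extracted kernel matches the direct composition of the two layer kernels.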
The crosstalk determination module 240 may be configured to determine crosstalk information of the device to be corrected based on the target convolution kernel, the crosstalk information being used for correcting the crosstalk of the device to be corrected. For a detailed description of the process, refer to step 340, which is not repeated here.
In some embodiments, the crosstalk determination module 240 may also be configured to determine, based on the difference between the central element of the target convolution kernel and the corresponding element in at least one direction, the crosstalk coefficient of that direction's element with respect to the target detection unit. For a detailed description of the process, refer to steps 410-430, which are not repeated here.
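A sketch of one plausible reading of this step (the precise mapping from kernel elements to coefficients is given in steps 410-430, which are outside this excerpt; the kernel values and the direction convention below are assumptions):

```python
import numpy as np

# A hypothetical 3x3 target convolution kernel extracted from the model;
# the values are made up for illustration.
K = np.array([[0.00, 0.03, 0.00],
              [0.02, 0.92, 0.04],
              [0.00, 0.01, 0.00]])

center = K[1, 1]   # response of the target detection unit itself
# Each off-centre element, read against the central element, estimates how
# much signal leaks between the target detection unit and the neighbour in
# that direction (up/down/left/right, i.e. the limited two-dimensional case).
coeffs = {
    "up": K[0, 1],
    "down": K[2, 1],
    "left": K[1, 0],
    "right": K[1, 2],
}
```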
In some embodiments, the model correction module 250 may be configured to correct, using the auxiliary model, the crosstalk in the projection data to be corrected of the device to be corrected.
It should be understood that the illustrated system and its modules may be implemented in a variety of ways. For example, in some embodiments, the system and its modules may be implemented in hardware, software, or a combination of software and hardware. Wherein the hardware portion may be implemented using dedicated logic; the software portions may be stored in a memory for execution by a suitable instruction execution system, such as a microprocessor or specially designed hardware. Those skilled in the art will appreciate that the methods and systems described above may be implemented using computer executable instructions and/or embodied in processor control code, such code being provided, for example, on a carrier medium such as a diskette, CD-or DVD-ROM, a programmable memory such as read-only memory (firmware), or a data carrier such as an optical or electronic signal carrier. The system and its modules of the present application may be implemented not only by hardware circuits such as very large scale integrated circuits or gate arrays, semiconductors such as logic chips, transistors, or programmable hardware devices such as field programmable gate arrays, programmable logic devices, etc., but also by software executed by various types of processors, for example, or by a combination of the above hardware circuits and software (e.g., firmware).
It should be noted that the above description of the crosstalk correction system 200 and the modules thereof is merely for convenience of description and is not intended to limit the present disclosure within the scope of the illustrated embodiments. It will be appreciated by those skilled in the art that, given the teachings of the present system, any combination of modules or sub-system configurations may be used to connect to other modules without departing from such teachings. For example, the obtaining module 210, the model determining module 220, the convolution kernel determining module 230, the crosstalk determining module 240, and the model correcting module 250 may share one storage module, and each storage module may be provided separately. Such variations are within the scope of the present application.
Fig. 3 is an exemplary flow chart of a method of correcting crosstalk shown in accordance with some embodiments of the present description.
As shown in fig. 3, the method 300 of correcting crosstalk may include:
step 310, acquiring first projection data and second projection data, where the first projection data is projection data including crosstalk of a device to be corrected, and the second projection data is projection data generated after the crosstalk of the device to be corrected is corrected.
In particular, this step 310 may be performed by the obtaining module 210.
The apparatus may be a radiation scanning device in which crosstalk exists. The radiation scanning device can be an X-ray scanner, a CT device (computed tomography device), a PET-CT device (positron emission computed tomography device) or the like.
The apparatus may be used to scan an object, which may be any physical object, such as a human body or another item, that can be radiographically scanned. The scanning mode may be an ordinary scan or a special scan. In some embodiments, the ordinary scan may include a transverse scan and a coronal scan. In some embodiments, the special scan may include a scout scan, a thin-layer scan, a magnification scan, a target scan, a high-resolution scan, and the like. In some embodiments, the device may scan the object multiple times from different angles, acquiring scan data from multiple different angles.
The reference object refers to an object used as a reference sample. The reference object may be a phantom, i.e., an object used to simulate the actual object to be scanned. The absorption or scattering of radiation by the phantom may be the same as or similar to that of the object to be scanned. In some embodiments, the phantom may be made of a metallic material, which may include copper, iron, nickel, alloys, and the like, or of a non-metallic material, which may include organic materials, inorganic materials, and the like. The size of the phantom may be 1 cm × 1 cm, 2 cm × 2 cm, 10 cm × 10 cm, etc.; the size of the phantom is not limited in this embodiment. In some embodiments, the shape of the phantom may be a regular shape with a gradient or an irregular shape, such as a circle or an irregular polygon.
In some embodiments, the projection data may refer to the signal data obtained by the detector while the device is scanning an object. For example, when a CT device scans an object, the resulting signal data are received by the detector. The projection data comprise pixel data corresponding to the detector; e.g., if the device comprises a 3 × 3 array of detection units, the projection data correspondingly comprise a pixel matrix of size 3 × 3.
Crosstalk refers to mutual interference between the pixels of the device's detector; that is, X-ray photons that should be received by one detection unit spread to adjacent detection units. Crosstalk reduces image contrast at tissue boundaries and can even create artifacts, affecting diagnosis. In principle, crosstalk involves all surrounding pixels, and since the detector may include multiple rows of detection units, the crosstalk coefficients may be distributed two-dimensionally. In some embodiments, the crosstalk coefficients may be inferred under a number of approximations, e.g., treating the crosstalk as mainly a first-order gradient field, assuming the crosstalk coefficients of neighboring pixels are equal, and measuring in only one dimension or in a limited two dimensions. The limited two-dimensional case considers only the pixels above, below, to the left, and to the right of the current pixel, and ignores diagonally adjacent pixels.
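The limited two-dimensional picture can be sketched as a cross-shaped convolution kernel applied to the detector intensities. The coefficient value `c` and the conservation assumption (kernel elements summing to 1) are illustrative, not taken from this description:

```python
import numpy as np

def apply_crosstalk(intensity, kernel):
    """Spread each pixel's signal to its neighbours by correlating the
    detector intensities with a crosstalk kernel (zero padding, 'same' size)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(intensity, ((ph, ph), (pw, pw)))
    out = np.zeros_like(intensity, dtype=float)
    for i in range(intensity.shape[0]):
        for j in range(intensity.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

# Limited two-dimensional crosstalk: only up/down/left/right neighbours leak,
# diagonal neighbours are ignored; the kernel sums to 1 so total signal is
# conserved. c = 0.02 is an assumed per-neighbour coefficient.
c = 0.02
kernel = np.array([[0.0, c,         0.0],
                   [c,   1 - 4 * c, c],
                   [0.0, c,         0.0]])

clean = np.full((5, 5), 100.0)
clean[2, 2] = 200.0                     # one bright detection unit
measured = apply_crosstalk(clean, kernel)
```

After applying the kernel, the bright pixel loses part of its signal, which reappears in the four adjacent pixels, which is exactly the boundary-blurring effect described above.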
The device to be corrected is a device whose crosstalk has not yet been corrected, and which therefore contains crosstalk. The standard device is a device that contains no crosstalk, or whose crosstalk has already been corrected.
In some embodiments, the device to be corrected may be the same type of device or the same device as the standard device. The same type may include the same type of equipment and the same arrangement of detector units.
In some embodiments, the projection data obtained by scanning the reference object with the device to be corrected may be referred to as first projection data, and the projection data obtained by scanning the reference object with the standard device may be referred to as second projection data. Because the uncorrected device contains crosstalk while the crosstalk of the standard device has been corrected, the first projection data are to-be-corrected projection data containing crosstalk, and the second projection data are crosstalk-free gold-standard projection data.
In some embodiments, the first projection data and the second projection data may be scanned in the same manner. In some embodiments, the same scanning mode may include scanning at the same angle, the same direction, and the same position of the phantom.
In some embodiments, the first projection data and the second projection data may also be obtained by a simulation method (for example, the second projection data is obtained by simulating the first projection data obtained by scanning, or the first projection data is obtained by simulating the second projection data obtained by scanning).
And 320, taking the first projection data and the second projection data as a training sample pair, and training an initial auxiliary model to obtain an auxiliary model, wherein the initial auxiliary model comprises at least one convolutional layer.
In particular, this step 320 may be performed by the model determination module 220.
The initial auxiliary model is an initialized, untrained neural network model whose parameters are in an initialized state. After the initial auxiliary model is trained, the auxiliary model is obtained.
In some embodiments, the initial auxiliary model may be a convolutional neural network including at least one convolutional layer. As shown at 600 in fig. 6, the network may include N convolutional layers, N being an integer greater than or equal to 1. In some embodiments, a convolutional neural network may be constructed as the initial auxiliary model by combining at least one convolutional layer with other network structures. For example, the initial auxiliary model may be a convolutional neural network including an input layer, N convolutional layers, and an output layer. As another example, the initial auxiliary model may be a convolutional neural network including an input layer, M convolutional layers, a fully connected layer, and an output layer, where M is an integer greater than or equal to 1. After the initial auxiliary model is trained, the resulting auxiliary model is likewise a convolutional neural network including at least one convolutional layer.
In some embodiments, the initial auxiliary model comprises a first activation function for converting the input data of the model from projection data to target-type data, and a second activation function for converting the output data of the model from the target-type data back to projection data. The target-type data may be data of any type, for example, intensity-domain data. The activation function may be any function having a reversible operation, for example, a rectified linear unit (ReLU), a hyperbolic tangent function (tanh), or an exponential function (exp). The first activation function and the second activation function are inverse operations of each other; e.g., the first activation function is the exponential function exp and the second activation function is the logarithmic function log.
Specifically, as shown at 600 in fig. 6, when the input data of the initial auxiliary model are data to be corrected, such as the first projection data, the sample data used for training need to be of the target data type. When the input data type differs from the required target data type, the input data to be corrected are first converted into the target data type and then fed into the N convolutional layers for the convolution operation. When data are output, the target data type is converted back into the required output data type, which serves as the output corrected data. For example, if the input data are projection data and the target data type is intensity-domain data, the input data may be converted from projection data to intensity-domain data using the first activation function, and after the convolution operation the output data may be converted from intensity-domain data back to projection data using the second activation function.
In some embodiments, the first activation function may be an exponential transformation of the input data, converting the input data from projection data to intensity domain data; the second activation function may be a logarithmic transformation of the output data, converting the output data from intensity domain data back to projection data. Specifically, when converting input data from projection data to intensity domain data, an exponential operation is performed on the input data, and the result serves as the corresponding intensity domain data, i.e., the sample data for model training. After training, a logarithmic operation is performed on the output data, converting it from intensity domain data back to projection data, and the result serves as the output of the final model. For example, if the input data is projection data x, the first activation function may be exp(x); after the data passes through the N convolutional layers of the auxiliary model and yields an intensity-domain output y, the second activation function log(y) converts it back to projection data.
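For illustration only, this exp/log inverse pair can be sketched in a few lines of Python (a minimal sketch; the function names and sample values are ours, not the patent's):

```python
import math

def projection_to_intensity(proj):
    # First "activation": exponential transform, projection -> intensity domain
    return [math.exp(p) for p in proj]

def intensity_to_projection(inten):
    # Second "activation": logarithmic transform, intensity -> projection domain
    return [math.log(i) for i in inten]

proj = [0.5, 1.2, 2.0]               # illustrative projection values
roundtrip = intensity_to_projection(projection_to_intensity(proj))
# roundtrip recovers proj up to floating-point error, since exp and log are inverses
```

Because the two transforms are exact inverses, the pair only changes the domain in which the convolutional layers operate; it adds no learnable parameters.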
The convolution operation of the convolutional layers in the initial auxiliary model may be understood as convolving the projection data input to the model to obtain crosstalk correction data, and obtaining the crosstalk-corrected projection data by superimposing the crosstalk correction data onto the originally input projection data, for example, by superimposing intensity values, projection data response values, and the like. Accordingly, as shown at 600 in FIG. 6, in some embodiments the initial auxiliary model includes a fusion of its input data with the output data of the at least one convolutional layer, implemented by summing the input data to be corrected with the crosstalk correction data obtained from the convolution operation, so as to obtain the crosstalk-corrected data.
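A minimal pure-Python sketch of this skip-connection fusion (the helper names, padding convention, and sample values are our assumptions; the patent does not prescribe implementation details):

```python
def conv2d_same(img, kernel):
    """2D cross-correlation with zero padding; output size equals input size."""
    h, w = len(img), len(img[0])
    kh, kw = len(kernel), len(kernel[0])
    ph, pw = kh // 2, kw // 2
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            s = 0.0
            for u in range(kh):
                for v in range(kw):
                    ii, jj = i + u - ph, j + v - pw
                    if 0 <= ii < h and 0 <= jj < w:
                        s += kernel[u][v] * img[ii][jj]
            out[i][j] = s
    return out

def residual_correct(data, kernel):
    # Fuse input with the convolution output: corrected = input + correction term
    corr = conv2d_same(data, kernel)
    return [[d + c for d, c in zip(dr, cr)] for dr, cr in zip(data, corr)]

data = [[1.0, 2.0], [3.0, 4.0]]      # illustrative data to be corrected
zero_k = [[0.0] * 3 for _ in range(3)]
unchanged = residual_correct(data, zero_k)   # zero kernel => no correction
```

With an all-zero kernel the correction term vanishes and the input passes through unchanged, which is why the learned kernel can be read as pure crosstalk information.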
In some embodiments, training the initial auxiliary model with the first projection data and the second projection data as training samples refers to using the first projection data as the input of the initial auxiliary model and the second projection data as the gold standard output corresponding to the first projection data, that is, as the label of the first projection data. Specifically, during training, the first projection data is input to the initial auxiliary model, and the model output corresponding to that input is compared with the corresponding second projection data to adjust the model parameters. As the initial auxiliary model is trained, its parameters, such as the convolution kernels, are learned.
In some embodiments, the initial auxiliary model may be trained by conventional methods on the training samples to learn the model parameters. For example, training may be based on gradient descent, Newton's method, iterative optimization, or other methods. In some embodiments, training ends when the trained auxiliary model satisfies a preset condition. The preset condition may be that the loss function converges or falls below a preset threshold, etc. Specifically, the loss function may be optimized by adjusting the training parameters of the model (e.g., learning rate, number of iterations, and batch size), and when the loss function satisfies the preset condition, training ends and the auxiliary model is obtained. In some embodiments, the loss function may be optimized by batch gradient descent (BGD), stochastic gradient descent (SGD), and the like.
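A toy training loop in the spirit of this step (a sketch under our own assumptions: a 1D residual model, synthetic data, and a hypothetical "true" crosstalk kernel; batch gradient descent on MSE):

```python
def conv1d_same(x, k):
    """1D cross-correlation with zero padding ('same' output size)."""
    r = len(k) // 2
    n = len(x)
    return [sum(k[u] * x[i + u - r] for u in range(len(k)) if 0 <= i + u - r < n)
            for i in range(n)]

def train_kernel(x, y, k_size=3, lr=0.05, iters=2000):
    """Learn a small crosstalk kernel by plain batch gradient descent on MSE."""
    k = [0.0] * k_size           # initial kernel, all zeros
    r = k_size // 2
    n = len(x)
    for _ in range(iters):
        # residual model output: input plus convolution correction term
        pred = [xi + ci for xi, ci in zip(x, conv1d_same(x, k))]
        err = [p - t for p, t in zip(pred, y)]
        for u in range(k_size):  # gradient of MSE with respect to each kernel tap
            g = sum(err[i] * x[i + u - r] for i in range(n) if 0 <= i + u - r < n)
            k[u] -= lr * 2.0 * g / n
    return k

# Synthetic pair: y plays the role of the "second projection data", generated
# from x with a known (hypothetical) crosstalk kernel that training should recover.
true_k = [0.004, -0.008, 0.004]
x = [1.0, -2.0, 3.0, 0.5, -1.5, 2.5, -0.5, 1.0]
y = [xi + ci for xi, ci in zip(x, conv1d_same(x, true_k))]
learned = train_kernel(x, y)
```

Because the residual model is linear in the kernel taps, the loss is quadratic and plain gradient descent converges to the generating kernel.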
In some embodiments, the initial auxiliary model may be iteratively updated according to the first projection data, the second projection data, and the loss function to obtain the auxiliary model.
The intermediate convolution kernel refers to a single convolution kernel determined during the training of the initial auxiliary model; after the training of the initial auxiliary model, it corresponds to the target convolution kernel. For details of the target convolution kernel, reference may be made to the related description of step 330, which is not repeated here.
The intermediate convolution kernel may be determined from the parameters of the initial auxiliary model or of the updated model; specifically, it may be determined from at least one convolution kernel of the initial auxiliary model, or from at least one convolution kernel whose parameters have been updated during training. For example, when the initial auxiliary model is trained on the first training sample, the intermediate convolution kernel is determined from at least one convolution kernel of the initial auxiliary model. As another example, during training the initial auxiliary model updates its parameters, and the intermediate convolution kernel is determined from at least one convolution kernel whose parameters have been updated. For the method of determining the intermediate convolution kernel from at least one convolution kernel, reference may be made to the related description of step 330, which is not repeated here.
In some embodiments, the loss function may be constructed based on the difference between the actual model output and the gold standard data corresponding to the input, i.e., the second projection data. In some embodiments, the loss function may further include an additional first loss function. The first loss function may be determined from the difference between the sum of the elements of the intermediate convolution kernel and a preset value.
The sum of the elements of the intermediate convolution kernel refers to adding up the values of all elements of the intermediate convolution kernel after each parameter update. The preset value may specifically be 0. In some embodiments, the difference between the element sum of the intermediate convolution kernel and the preset value may be the absolute value or the squared difference of that difference, etc. With this embodiment, when training the initial auxiliary model to obtain the auxiliary model, the first loss function is minimized, i.e., the difference between the element sum of the intermediate convolution kernel and the preset value (e.g., 0) is driven toward 0 as the model parameters are updated and learned, so that after training, the element sum of the target convolution kernel corresponding to the intermediate convolution kernel is close to 0. On one hand, this can accelerate the training process of the model; on the other hand, the trained model parameters can be more accurate.
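The element-sum penalty can be sketched as follows (a hedged illustration; the squared-difference form, weight, and sample kernels are our assumptions, and the "balanced" values are merely patterned on the 0.3%/0.4%/-2.8% figures used in the FIG. 5 example):

```python
def element_sum(kernel):
    # Sum of all elements of the (intermediate) convolution kernel
    return sum(sum(row) for row in kernel)

def first_loss(kernel, preset=0.0):
    # Extra "first loss function": squared difference between the
    # kernel element sum and the preset value (here 0)
    return (element_sum(kernel) - preset) ** 2

# A zero-sum kernel (crosstalk only redistributes signal) incurs ~no penalty:
balanced = [[0.003, 0.004, 0.003],
            [0.004, -0.028, 0.004],
            [0.003, 0.004, 0.003]]
# A kernel that creates signal out of nothing is penalized:
unbalanced = [[0.01, 0.01, 0.01],
              [0.01, 0.01, 0.01],
              [0.01, 0.01, 0.01]]
```

The penalty encodes the physical prior that crosstalk redistributes signal among detection units without changing the total, which is why the element sum of a correct kernel should be near 0.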
With this embodiment, at least one candidate convolution kernel of the auxiliary model is learned during the training of the initial auxiliary model. During training, the first projection data containing mechanical deviation is input to the model, and the second projection data with the mechanical deviation corrected is used as the gold standard for the model output; the auxiliary model is thereby obtained through training, and the convolution with the at least one learned candidate convolution kernel represents the computation that maps the input projection data to be corrected to the corrected projection data with the mechanical deviation corrected. For example, the corrected projection data may be computed from the projection data to be corrected by interpolation, where the interpolation may be a conventional method such as linear interpolation or Lagrange interpolation; for details of computing corrected projection data by interpolation, reference may be made to the related description of step 410 in FIG. 4, which is not repeated here. When the auxiliary model includes multiple convolutional layers or multiple candidate convolution kernels, the projection data to be corrected can undergo multiple correction computations, improving the correction precision, and the corresponding model parameters obtained from the auxiliary model are more accurate.
Step 330, determining a target convolution kernel corresponding to the at least one convolution layer based on the auxiliary model.
In particular, this step 330 may be performed by the convolution kernel determination module 230.
The target convolution kernel refers to the single convolution kernel that is ultimately required. Specifically, it may be determined based on the intermediate convolution kernel of the auxiliary model. For example, when the auxiliary model includes one convolutional layer containing one convolution kernel, that convolution kernel is taken as the target convolution kernel. When the auxiliary model includes multiple convolutional layers containing multiple convolution kernels, or includes one convolutional layer containing multiple convolution kernels, a single convolution kernel may be derived from the multiple convolution kernels and used as the target convolution kernel.
In some embodiments, after the auxiliary model is obtained, trained convolution kernels may be extracted from the convolution layer of the auxiliary model. For example, when the auxiliary model includes a convolutional layer that contains trained convolutional kernels, the trained convolutional kernels can be extracted therefrom. When the auxiliary model includes a plurality of convolutional layers, each of which contains a trained convolution kernel, at least one trained convolution kernel may be extracted from at least one convolutional layer.
In some embodiments, a single convolution kernel is derived as the target convolution kernel based on at least one trained convolution kernel.
In some embodiments, a single convolution kernel is obtained as the target convolution kernel based on at least one trained convolution kernel by convolving the multiple convolution kernels of the auxiliary model together into a single kernel. For example, if the auxiliary model includes three 3 × 3 convolution kernels A, B, and C, the three kernels are convolved as A * B * C to obtain a new single convolution kernel, and this single kernel is used as the target convolution kernel.
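A 1D sketch (our own, not the patent's) of why sequential kernels collapse into one composite kernel: convolution is associative, so applying A then B equals applying conv(A, B) once. Note that under full convolution the composite's support grows (two 3-tap kernels compose into a 5-tap kernel), so keeping the composite at the original size implies a padding or truncation convention:

```python
def conv_full(a, b):
    """Full 1D convolution of two sequences; output length is len(a)+len(b)-1."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

x = [1.0, 2.0, 3.0, 4.0]          # illustrative signal
A = [1.0, 0.0, -1.0]              # illustrative kernels
B = [0.5, 0.5]

two_pass = conv_full(conv_full(x, A), B)   # apply A, then B
one_pass = conv_full(x, conv_full(A, B))   # apply the single composite kernel
```

The two results agree element by element, which is the property that lets a stack of convolutional layers be summarized by one target kernel.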
In some embodiments, a single convolution kernel is obtained as the target convolution kernel based on at least one candidate convolution kernel by constructing a specific input matrix, inputting the input matrix into the auxiliary model, and passing it through the at least one candidate convolution kernel, thereby extracting the target convolution kernel from the auxiliary model. The input matrix may be determined based on the size of the candidate convolution kernel. For example only, if the candidate convolution kernel is 3 × 3, so that the corresponding target convolution kernel is also 3 × 3, the input matrix may be 5 × 5; the 5 × 5 input matrix is input into the auxiliary model with a convolution stride of 1 in the at least one convolutional layer, and the auxiliary model outputs a single 3 × 3 convolution kernel, i.e., the target convolution kernel. Alternatively, with a 3 × 3 candidate convolution kernel, the input matrix may be 7 × 7; the 7 × 7 input matrix is input into the auxiliary model with a convolution stride of 2, and the auxiliary model outputs a single 3 × 3 convolution kernel, i.e., the target convolution kernel. In some embodiments, the input matrix may include, but is not limited to, an identity-like matrix of the convolution kernel's size, or 2 or 3 times that size, etc. In some embodiments, exactly one element in each row of the input matrix is 1 and the remaining elements are 0; specifically, the nth element of the nth row of the input matrix may be 1 and the remaining elements 0.
With this embodiment, the input matrix is equivalent to an impulse (delta) function: the at least one candidate convolution kernel of the at least one convolutional layer of the auxiliary model can be regarded as a single target convolution kernel, and inputting the impulse-like matrix into the auxiliary model yields the target convolution kernel as output. In this way, the target convolution kernel can be obtained simply and quickly.
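A single-layer sketch of this impulse-response extraction (a hedged illustration with our own helper and arbitrary illustrative kernel values; note CNN layers typically compute cross-correlation, so the raw impulse response is the kernel rotated 180 degrees and must be flipped back):

```python
def xcorr2d_valid(img, k):
    """'Valid' 2D cross-correlation (the operation CNN conv layers typically use)."""
    kh, kw = len(k), len(k[0])
    oh, ow = len(img) - kh + 1, len(img[0]) - kw + 1
    return [[sum(k[u][v] * img[i + u][j + v] for u in range(kh) for v in range(kw))
             for j in range(ow)] for i in range(oh)]

kernel = [[0.001, 0.004, 0.002],     # hypothetical, deliberately asymmetric
          [0.004, -0.028, 0.005],
          [0.003, 0.004, 0.003]]

# 5x5 impulse input: a single 1 at the center, zeros elsewhere
impulse = [[0.0] * 5 for _ in range(5)]
impulse[2][2] = 1.0

response = xcorr2d_valid(impulse, kernel)          # 3x3 output
extracted = [row[::-1] for row in response[::-1]]  # undo the 180-degree flip
```

The extracted matrix reproduces the layer's kernel exactly, which is the mechanism behind reading the target convolution kernel off the model's output.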
Step 340, determining crosstalk information of the device to be corrected based on the target convolution kernel, where the crosstalk information of the device to be corrected is used to correct crosstalk of the device to be corrected.
In particular, this step 340 may be performed by the crosstalk determination module 240.
The crosstalk information is information related to crosstalk, and may include, for example, crosstalk information in the horizontal and vertical directions (the coordinate-axis X and Y directions), crosstalk information in the 45-degree direction, crosstalk information in the 135-degree direction, and the like. Taking the horizontal and vertical directions (coordinate-axis X and Y directions) as an example, the crosstalk information may be crosstalk coefficient information of the surrounding detectors in the X and Y directions; for example, the crosstalk coefficient of the adjacent detector on the negative X axis (i.e., to the left) is 0.4%, and the crosstalk coefficient of the adjacent detector on the positive X axis (i.e., to the right) is 0.4%. The coordinate system may be determined according to the installation requirements of the device.
As can be seen from the foregoing, in the trained auxiliary model the at least one learned convolution kernel represents the computation that derives crosstalk correction data from the input projection data to be corrected and adds that correction data to the original input to obtain the crosstalk-corrected projection data. The at least one trained convolution kernel therefore contains crosstalk information that can be used to correct the device to be corrected, and the crosstalk information of the device can then be determined from the target convolution kernel derived from the at least one trained convolution kernel. The target convolution kernel determined from the trained convolution kernels enables better crosstalk correction, and the crosstalk information of the device to be corrected determined from it can be of higher precision.
The crosstalk information of the device to be corrected determined based on the target convolution kernel may include crosstalk coefficients of detection units of the device to be corrected, for example, crosstalk coefficients of detection units adjacent to a certain detection unit, and the like.
In some embodiments, the crosstalk information of the device to be corrected determined based on the target convolution kernel may include crosstalk coefficients of target detection units of a plurality of detection units of a detector of the device to be corrected. The object detection unit refers to a detection unit that needs to correct crosstalk. The specific content of determining the crosstalk information of the device to be corrected based on the target convolution kernel can refer to the related description of fig. 4, and is not described herein again.
After determining the crosstalk information of the device to be corrected, the crosstalk information can be used to correct the crosstalk of the device. For example, after obtaining the crosstalk coefficient of the target detection unit of the device to be corrected, the ideal projection data of the target detection unit, that is, the crosstalk-corrected projection data, may be calculated based on the crosstalk coefficient and the projection data to be corrected of the target detection unit. As shown at 700 in FIG. 7, (a) is a schematic image obtained before the crosstalk of the device is corrected, and (b) is a schematic image obtained after the crosstalk is corrected; it can be seen that after crosstalk correction, the imaging quality is better and the image is clearer.
In some embodiments, the trained auxiliary model may be used directly to correct the crosstalk of the projection data to be corrected of the device to be corrected. This step may be performed by the model correction module 250. Specifically, after the initial auxiliary model has been trained with the first projection data and the second projection data as a training sample pair to obtain the auxiliary model, the projection data of the device to be corrected is input into the auxiliary model; the projection data is converted into response data of the target data type; the auxiliary model performs a convolution operation between the obtained target convolution kernel and the response data to obtain crosstalk correction data; the correction data is added to the response data to obtain crosstalk-corrected data; and the crosstalk-corrected data of the target data type is converted back into projection data, yielding the corrected projection data.
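The full inference path just described (convert to the target domain, convolve, fuse, convert back) can be sketched end to end; this is a hedged composite of the pieces above under our own naming and padding assumptions, not the patented implementation:

```python
import math

def conv2d_same(img, k):
    """Zero-padded 2D cross-correlation, output size equal to input size."""
    h, w, kh, kw = len(img), len(img[0]), len(k), len(k[0])
    ph, pw = kh // 2, kw // 2
    return [[sum(k[u][v] * img[i + u - ph][j + v - pw]
                 for u in range(kh) for v in range(kw)
                 if 0 <= i + u - ph < h and 0 <= j + v - pw < w)
             for j in range(w)] for i in range(h)]

def correct_projection(proj, kernel):
    # 1) projection -> intensity domain (first activation, exp)
    inten = [[math.exp(v) for v in row] for row in proj]
    # 2) convolve with the target kernel to get the crosstalk correction term
    corr = conv2d_same(inten, kernel)
    # 3) fuse: corrected response = original response + correction term
    fixed = [[a + b for a, b in zip(r1, r2)] for r1, r2 in zip(inten, corr)]
    # 4) intensity -> projection domain (second activation, log)
    return [[math.log(v) for v in row] for row in fixed]

proj = [[0.2, 0.5, 0.3],             # illustrative projection data
        [0.4, 0.6, 0.5],
        [0.3, 0.4, 0.2]]
zero_kernel = [[0.0] * 3 for _ in range(3)]
unchanged = correct_projection(proj, zero_kernel)  # no crosstalk => no change
```

With a zero kernel the pipeline is the identity, confirming that all correction comes from the learned kernel.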
Fig. 4 is an exemplary flow diagram illustrating a method of determining crosstalk information for a device to be corrected based on a target convolution kernel in accordance with some embodiments of the present description.
FIG. 5 is a schematic diagram of an exemplary detection unit pixel matrix and the corresponding target convolution kernel, including an example of a target convolution kernel and an example of a detection unit pixel matrix. As shown in FIG. 5, the first projection data of the training sample of the initial auxiliary model is a 3 × 3 pixel matrix formed by 9 detection unit pixels 1, 2, 3, 4, 5, 6, 7, 8 and N, where N is the detection unit pixel corresponding to the target detection unit. The convolution kernel size of the at least one convolutional layer of the trained auxiliary model is 3 × 3, the same as the pixel matrix size of the input first projection data, and the target convolution kernel determined from the convolution kernel(s) of the at least one convolutional layer of the auxiliary model is also 3 × 3. As shown in FIG. 5, the determined target convolution kernel includes elements k1, k2, k3, k4, k5, k6, k7, k8 and a central element k; these elements have corresponding values, which in the example of FIG. 5 include 0.3%, 0.4%, 0.3% and -2.8%. The detection unit pixel matrix may include a direction 1 (horizontal, i.e., the coordinate-axis X direction), a direction 2 (vertical, i.e., the coordinate-axis Y direction), a direction 3 (45 degrees from the X axis), and a direction 4 (135 degrees from the X axis).
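For concreteness, the FIG. 5 element labels can be pinned to a 3 × 3 matrix as follows. The layout (row 0 holds k1, k2, k3; row 1 holds k4, k, k5; row 2 holds k6, k7, k8, mirroring detection units 1-8 around N) and the numeric values (patterned on the 0.3%, 0.4% and -2.8% figures) are our assumptions for illustration:

```python
def label_kernel(k):
    """Map a 3x3 target convolution kernel to the FIG. 5 element labels
    (layout assumption: detection units 1..8 read row by row around N)."""
    (k1, k2, k3), (k4, kc, k5), (k6, k7, k8) = k
    return {"k1": k1, "k2": k2, "k3": k3, "k4": k4, "k": kc,
            "k5": k5, "k6": k6, "k7": k7, "k8": k8}

target = [[0.003, 0.004, 0.003],     # hypothetical values: corners 0.3%,
          [0.004, -0.028, 0.004],    # edges 0.4%, center -2.8%
          [0.003, 0.004, 0.003]]
labels = label_kernel(target)
```

With these values the elements sum to approximately zero, consistent with the element-sum constraint discussed for the first loss function.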
In some embodiments, the crosstalk coefficient of the device to be corrected may be determined based on the target convolution kernel, and crosstalk information of the device to be corrected may be determined according to the method 400.
As shown in fig. 4, the method 400 may include:
step 410, determining a crosstalk coefficient of the corresponding element in at least one direction to the target detection unit based on a difference between the central element in the target convolution kernel and the corresponding element in at least one direction.
In particular, step 410 may be performed by crosstalk determination module 240.
The crosstalk coefficient is a measure of the magnitude of crosstalk between detection units, and may be expressed as a percentage of a detection unit's signal strength. For the target detection unit, the crosstalk coefficient of a given detection unit indicates how much of that unit's own signal strength it gives to the target detection unit. For example, as shown in FIG. 5, detection unit 2 is located directly above and adjacent to the central target detection unit N; if detection unit 2 gives 0.4% of its signal to the target detection unit N, the crosstalk coefficient of detection unit 2 with respect to N is 0.4%. The crosstalk coefficient of the target detection unit itself indicates how much of its own signal strength it gives to the surrounding detection units. For example, as shown in FIG. 5, if the central target detection unit N is adjacent to 8 detection units and gives 2.8% of its signal to the adjacent units in total, the crosstalk coefficient of the target detection unit N is -2.8%, where the negative value indicates that the target detection unit gives its own signal away to the surrounding units.
At least one direction refers to a direction in the target convolution kernel element array, corresponding to a direction in the detection unit pixel matrix, and may include direction 1 (horizontal, i.e., the X-axis direction), direction 2 (vertical, i.e., the Y-axis direction), direction 3 (45 degrees from the X axis), and direction 4 (135 degrees from the X axis).
The difference between the central element and the corresponding element in at least one direction in the target convolution kernel refers to the difference between the central element and the elements in that direction. Specifically, as shown in FIG. 5, the differences may include the differences of the central element from elements k4 and k5 in direction 1, namely (k5-k) and (k-k4); from elements k2 and k7 in direction 2, namely (k7-k) and (k-k2); from elements k3 and k6 in direction 3, namely (k6-k) and (k-k3); and from elements k1 and k8 in direction 4, namely (k8-k) and (k-k1).
After determining the difference between the central element and the corresponding element in at least one direction in the target convolution kernel, the crosstalk coefficient of the corresponding element with respect to the target detection unit may be determined based on that difference; the crosstalk coefficient may be expressed as the difference between the central element and the directional element. Specifically, as shown in FIG. 5, detection unit 7 is located directly below and adjacent to the target detection unit N; in the target convolution kernel, element k7 in direction 2 corresponds to detection unit 7 and element k corresponds to the target detection unit N, so the crosstalk coefficient of detection unit 7 with respect to N can be expressed as (k7-k).
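The per-direction difference pairs can be collected in one helper (a sketch; the matrix layout follows our reading of the FIG. 5 numbering, with units 1-8 arranged row by row around N, and the kernel values are hypothetical):

```python
def direction_coefficients(k):
    """Per-direction crosstalk coefficient pairs as differences against the
    central element, following the pairings in the text: e.g. direction 2
    pairs elements k7 and k2, giving (k7 - k) and (k - k2)."""
    (k1, k2, k3), (k4, kc, k5), (k6, k7, k8) = k
    return {
        1: (k5 - kc, kc - k4),   # horizontal (X-axis) direction
        2: (k7 - kc, kc - k2),   # vertical (Y-axis) direction
        3: (k6 - kc, kc - k3),   # 45-degree direction
        4: (k8 - kc, kc - k1),   # 135-degree direction
    }

target = [[0.001, 0.004, 0.002],     # illustrative asymmetric kernel values
          [0.004, -0.028, 0.005],
          [0.003, 0.004, 0.003]]
pairs = direction_coefficients(target)
```

For direction 2 this yields (k7 - k) = 0.004 - (-0.028) and (k - k2) = -0.028 - 0.004, matching the pairing described for detection units 7 and 2.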
Step 420, determining a first crosstalk coefficient of the target detection unit in at least one direction based on a sum of the crosstalk coefficients of the corresponding elements in at least one direction to the target detection unit.
In particular, this step 420 may be performed by the crosstalk determination module 240.
The first crosstalk coefficient represents the combined crosstalk of the surrounding detection units on the target detection unit, reflecting the change of the crosstalk coefficient values. Specifically, a first-order difference may be taken over the elements of the target convolution kernel: after determining the crosstalk coefficients of the corresponding elements in a given direction with respect to the target detection unit, those coefficients are added, and the sum is the first crosstalk coefficient (i.e., the first-order coefficient) of the target detection unit in that direction. As shown in FIG. 5, the crosstalk coefficients of elements 7 and 2 in direction 2 with respect to the target detection unit N are (k7-k) and (k-k2), respectively, so the first crosstalk coefficient of N in direction 2 may be ((k7-k) + (k-k2)) = (k7-k2). The overall first crosstalk coefficient of the target detection unit may be the sum of the first crosstalk coefficients over the respective directions. As shown in FIG. 5, the first crosstalk coefficient of the target detection unit N may be calculated by the following formula:
first crosstalk coefficient of N = (k5 - k4) + (k7 - k2) + (k6 - k3) + (k8 - k1)
step 430, determining a second crosstalk coefficient of the target detection unit in at least one direction based on a difference between the crosstalk coefficients of the corresponding elements in at least one direction to the target detection unit.
In particular, this step 430 may be performed by the crosstalk determination module 240.
The second crosstalk coefficient represents the trend of change of the crosstalk coefficient values. Specifically, a second-order difference may be taken over the elements of the target convolution kernel: after determining the crosstalk coefficients of the corresponding elements in a given direction with respect to the target detection unit, those coefficients are subtracted, and the difference is the second crosstalk coefficient (i.e., the second-order coefficient) of the target detection unit in that direction. As shown in FIG. 5, the crosstalk coefficients of elements 7 and 2 in direction 2 with respect to the target detection unit N are (k7-k) and (k-k2), respectively, so the second crosstalk coefficient of N in direction 2 may be ((k7-k) - (k-k2)) = (k7+k2-2k). The overall second crosstalk coefficient of the target detection unit may be the sum of the second crosstalk coefficients over the respective directions. As shown in FIG. 5, the second crosstalk coefficient of the target detection unit N may be calculated by the following formula:
second crosstalk coefficient of N = (k5 + k4 - 2k) + (k7 + k2 - 2k) + (k6 + k3 - 2k) + (k8 + k1 - 2k)
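The first-order and second-order coefficients described in steps 420 and 430 can be computed directly from a 3 × 3 kernel (a sketch; the matrix layout and the symmetric sample values are our assumptions, patterned on the FIG. 5 figures):

```python
def first_crosstalk(k):
    # Sum over the four directions of the first-order differences,
    # e.g. direction 2 contributes (k7 - k) + (k - k2) = k7 - k2
    (k1, k2, k3), (k4, kc, k5), (k6, k7, k8) = k
    return (k5 - k4) + (k7 - k2) + (k6 - k3) + (k8 - k1)

def second_crosstalk(k):
    # Sum over the four directions of the second-order differences,
    # e.g. direction 2 contributes (k7 - k) - (k - k2) = k7 + k2 - 2k
    (k1, k2, k3), (k4, kc, k5), (k6, k7, k8) = k
    return ((k5 + k4 - 2 * kc) + (k7 + k2 - 2 * kc)
            + (k6 + k3 - 2 * kc) + (k8 + k1 - 2 * kc))

symmetric = [[0.003, 0.004, 0.003],   # hypothetical: corners 0.3%, edges 0.4%,
             [0.004, -0.028, 0.004],  # center -2.8%
             [0.003, 0.004, 0.003]]
```

For a symmetric kernel the first-order coefficient vanishes (opposite neighbors cancel), while the second-order coefficient reduces to the neighbor sum minus 8 times the central element.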
embodiments of the present specification further provide an apparatus, which includes a processor configured to execute the foregoing method for correcting crosstalk. The method of correcting crosstalk may include: acquiring first projection data and second projection data, wherein the first projection data comprise crosstalk of equipment to be corrected, and the second projection data are projection data of which the crosstalk of the equipment to be corrected is corrected; training an initial auxiliary model by taking the first projection data and the second projection data as a training sample pair to obtain an auxiliary model, wherein the initial auxiliary model comprises at least one convolutional layer; determining a target convolution kernel corresponding to the at least one convolution layer based on the auxiliary model; and determining crosstalk information of the equipment to be corrected based on the target convolution kernel, wherein the crosstalk information of the equipment to be corrected is used for correcting crosstalk of the equipment to be corrected.
Embodiments of the present disclosure also provide a computer-readable storage medium, where the storage medium stores computer instructions, and when the computer reads the computer instructions in the storage medium, the computer executes the foregoing method for correcting crosstalk. The method of correcting crosstalk may include: acquiring first projection data and second projection data, wherein the first projection data comprise crosstalk of equipment to be corrected, and the second projection data are projection data of which the crosstalk of the equipment to be corrected is corrected; training an initial auxiliary model by taking the first projection data and the second projection data as a training sample pair to obtain an auxiliary model, wherein the initial auxiliary model comprises at least one convolutional layer; determining a target convolution kernel corresponding to the at least one convolution layer based on the auxiliary model; and determining crosstalk information of the equipment to be corrected based on the target convolution kernel, wherein the crosstalk information of the equipment to be corrected is used for correcting crosstalk of the equipment to be corrected.
The beneficial effects that may be brought by the embodiments of the present description include, but are not limited to: (1) the target convolution kernel, and hence the crosstalk coefficients, are learned from a small amount of training sample data, and crosstalk correction of the device to be corrected is realized based on the determined crosstalk coefficient information; no large body of training samples is required, making the approach more practical and the device crosstalk more convenient to correct; (2) an additional loss function, determined from the difference between the element sum of the intermediate convolution kernel and a preset value, is used to train the initial auxiliary model, which on one hand can accelerate the training process and on the other hand can make the trained model parameters more accurate; (3) the target convolution kernel is extracted by constructing an input matrix and feeding it into the auxiliary model, making the extraction more efficient; (4) the constructed auxiliary model uses multiple convolutional layers, and the target convolution kernel is determined from the multiple convolution kernels of those layers; the repeated convolutions yield higher crosstalk correction precision, and the crosstalk information determined from the resulting target convolution kernel is more accurate; (5) the multidimensional differential operation is cast as a convolutional neural network, the convolution is easily extended to higher dimensions, and multidimensional crosstalk coefficients can be obtained by training the network, overcoming the dimensionality limitation of crosstalk correction.
It is to be noted that different embodiments may produce different advantages, and in different embodiments, any one or combination of the above advantages may be produced, or any other advantages may be obtained.
Having thus described the basic concept, it will be apparent to those skilled in the art that the foregoing detailed disclosure is to be regarded as illustrative only and not as limiting the present specification. Various modifications, improvements and adaptations to the present description may occur to those skilled in the art, although not explicitly described herein. Such modifications, improvements and adaptations are proposed in the present specification and thus fall within the spirit and scope of the exemplary embodiments of the present specification.
Also, the description uses specific words to describe embodiments of the description. Reference throughout this specification to "one embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic described in connection with at least one embodiment of the specification is included. Therefore, it is emphasized and should be appreciated that two or more references to "an embodiment" or "one embodiment" or "an alternative embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, some features, structures, or characteristics of one or more embodiments of the specification may be combined as appropriate.
Moreover, those skilled in the art will appreciate that aspects of the present description may be illustrated and described in terms of several patentable species or situations, including any new and useful combination of processes, machines, manufacture, or materials, or any new and useful improvement thereof. Accordingly, aspects of this description may be performed entirely by hardware, entirely by software (including firmware, resident software, micro-code, etc.), or by a combination of hardware and software. The above hardware or software may be referred to as a "data block," "module," "engine," "unit," "component," or "system." Furthermore, aspects of the present description may be represented as a computer product, including computer readable program code, embodied in one or more computer readable media.
The computer storage medium may comprise a propagated data signal with the computer program code embodied therewith, for example, on baseband or as part of a carrier wave. The propagated signal may take any of a variety of forms, including electromagnetic, optical, etc., or any suitable combination. A computer storage medium may be any computer-readable medium that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code located on a computer storage medium may be propagated over any suitable medium, including radio, cable, fiber optic cable, RF, or the like, or any combination of the preceding.
Computer program code required for the operation of various portions of this specification may be written in any one or more programming languages, including an object oriented programming language such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, and the like, a conventional programming language such as C, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, ABAP, a dynamic programming language such as Python, Ruby, and Groovy, or other programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or processing device. In the latter scenario, the remote computer may be connected to the user's computer through any network format, such as a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet), or in a cloud computing environment, or as a service, such as a software as a service (SaaS).
Additionally, the order in which the elements and sequences of the process are recited in the specification, the use of alphanumeric characters, or other designations, is not intended to limit the order in which the processes and methods of the specification occur, unless otherwise specified in the claims. While various presently contemplated embodiments of the invention have been discussed in the foregoing disclosure by way of example, it is to be understood that such detail is solely for that purpose and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover all modifications and equivalent arrangements that are within the spirit and scope of the embodiments herein. For example, although the system components described above may be implemented by hardware devices, they may also be implemented by software-only solutions, such as installing the described system on an existing processing device or mobile device.
Similarly, it should be noted that in the preceding description of embodiments of the present specification, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the embodiments. This method of disclosure, however, is not to be interpreted as implying that the claimed subject matter requires more features than are expressly recited in each claim. Indeed, claimed subject matter may lie in less than all of the features of a single embodiment disclosed above.
Numerals describing the number of components, attributes, etc. are used in some embodiments; it should be understood that such numerals used in the description of the embodiments are in some instances qualified by the modifiers "about", "approximately" or "substantially". Unless otherwise indicated, "about", "approximately" or "substantially" indicates that the stated number allows a variation of ±20%. Accordingly, in some embodiments, the numerical parameters used in the specification and claims are approximations that may vary depending upon the desired properties of the individual embodiments. In some embodiments, numerical parameters should take into account the specified significant digits and employ ordinary rounding. Notwithstanding that the numerical ranges and parameters setting forth the broad scope of some embodiments are approximations, in the specific examples such numerical values are set forth as precisely as practicable.
For each patent, patent application, patent application publication, and other material, such as articles, books, specifications, publications, documents, and the like, cited in this specification, the entire contents thereof are hereby incorporated by reference. Application history documents that are inconsistent with or conflict with the contents of this specification are excluded, as are documents (currently or later appended to this specification) that limit the broadest scope of the claims of this specification. It should be noted that if the descriptions, definitions, and/or use of terms in the material accompanying this specification are inconsistent with or contrary to those in this specification, the descriptions, definitions, and/or use of terms in this specification shall prevail.
Finally, it should be understood that the embodiments described herein are merely illustrative of the principles of the embodiments of the present disclosure. Other variations are also possible within the scope of the present description. Thus, by way of example, and not limitation, alternative configurations of the embodiments of the specification can be considered consistent with the teachings of the specification. Accordingly, the embodiments of the present description are not limited to only those embodiments explicitly described and depicted herein.

Claims (10)

1. A method of correcting crosstalk, comprising:
acquiring first projection data and second projection data, wherein the first projection data are projection data, including crosstalk, of a device to be corrected, and the second projection data are projection data generated after the crosstalk of the device to be corrected is corrected;
training an initial auxiliary model by taking the first projection data and the second projection data as a training sample pair to obtain an auxiliary model, wherein the initial auxiliary model comprises at least one convolutional layer;
determining a target convolution kernel corresponding to the at least one convolution layer based on the auxiliary model;
and determining crosstalk information of the device to be corrected based on the target convolution kernel, wherein the crosstalk information of the device to be corrected is used for correcting crosstalk of the device to be corrected.
2. The method of claim 1, the initial auxiliary model comprising a first activation function for converting input data of the initial auxiliary model from projection data to target type data and a second activation function for converting output data of the initial auxiliary model from the target type data to projection data.
3. The method of claim 2, the first activation function being an exponential transformation of the input data for converting the input data from projection data to intensity domain data; the second activation function is a logarithmic transformation of the output data for converting the output data from intensity domain data to projection data.
4. The method of claim 1, wherein training an initial auxiliary model to obtain an auxiliary model using the first projection data and the second projection data as a training sample pair comprises:
iteratively updating the initial auxiliary model according to the training sample pair and a loss function to obtain the auxiliary model; wherein the loss function includes a first loss function determined according to a difference between a preset value and a sum of the elements of an intermediate convolution kernel, the intermediate convolution kernel being determined based on parameters of the initial auxiliary model or of the updated model.
5. The method of claim 1, the determining the target convolution kernel for the at least one convolution layer based on the auxiliary model comprising:
extracting at least one trained convolution kernel corresponding to at least one convolution layer of the auxiliary model;
and performing convolution operation on at least one trained convolution kernel to obtain the target convolution kernel.
6. The method of claim 1, the determining the target convolution kernel for the at least one convolution layer based on the auxiliary model comprising:
determining an input matrix, a size of the input matrix being determined based on a size of a convolution kernel of the at least one convolution layer;
and inputting the input matrix into the auxiliary model, and extracting the target convolution kernel corresponding to the at least one convolution layer from the auxiliary model through the input matrix.
7. The method of claim 1, the crosstalk information of the device to be corrected comprising crosstalk coefficients of a target detection unit among a plurality of detection units of a detector of the device to be corrected; the determining the crosstalk information of the device to be corrected based on the target convolution kernel includes:
and determining, based on a difference between a central element of the target convolution kernel and a corresponding element in at least one direction, the crosstalk coefficient of the corresponding element in the at least one direction with respect to the target detection unit.
8. A system for correcting crosstalk, comprising:
an acquisition module: configured to acquire first projection data and second projection data, wherein the first projection data are projection data, including crosstalk, of a device to be corrected, and the second projection data are projection data generated after the crosstalk of the device to be corrected is corrected;
a model determination module: configured to train an initial auxiliary model, using the first projection data and the second projection data as a training sample pair, to obtain an auxiliary model, the initial auxiliary model comprising at least one convolutional layer;
a convolution kernel determination module: configured to determine, based on the auxiliary model, a target convolution kernel corresponding to the at least one convolution layer;
a crosstalk determination module: configured to determine crosstalk information of the device to be corrected based on the target convolution kernel, wherein the crosstalk information of the device to be corrected is used for correcting crosstalk of the device to be corrected.
9. An apparatus for correcting crosstalk, comprising at least one storage medium and at least one processor, the at least one storage medium for storing computer instructions; the at least one processor is configured to execute the computer instructions to implement the method of any of claims 1-7.
10. A computer-readable storage medium storing computer instructions which, when read by a computer, cause the computer to perform the method of any one of claims 1 to 7.
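Claims 2-3 describe wrapping the convolutional layers between an exponential and a logarithmic activation, since crosstalk mixes signals in the intensity domain rather than the projection domain. A minimal NumPy sketch of such a forward pass follows; this is an illustration only, and the p = -ln(I) projection convention is an assumption, not a detail stated in the claims.

```python
import numpy as np

def crosstalk_forward(projection, kernel):
    """Toy forward pass mirroring claims 2-3: the first activation
    converts projection data to intensity-domain data (exponential
    transform), a convolution models the crosstalk mixing there, and
    the second activation converts back (logarithmic transform)."""
    intensity = np.exp(-np.asarray(projection, dtype=float))  # first activation
    mixed = np.convolve(intensity, np.asarray(kernel), mode="same")  # crosstalk layer
    return -np.log(mixed)                                     # second activation
```

With a kernel that sums to 1, a uniform projection passes through unchanged away from the edges, while an isolated attenuating channel is smeared into its neighbours, which is the behaviour the correction is meant to undo.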
CN202110414435.0A 2021-04-16 2021-04-16 Method and system for correcting crosstalk Active CN112991228B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202110414435.0A CN112991228B (en) 2021-04-16 2021-04-16 Method and system for correcting crosstalk
PCT/CN2022/087408 WO2022218438A1 (en) 2021-04-16 2022-04-18 Calibration methods and systems for imaging field
US18/488,012 US20240070918A1 (en) 2021-04-16 2023-10-16 Calibration methods and systems for imaging field

Publications (2)

Publication Number Publication Date
CN112991228A true CN112991228A (en) 2021-06-18
CN112991228B CN112991228B (en) 2023-02-07

Family

ID=76340929


Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022218438A1 (en) * 2021-04-16 2022-10-20 Shanghai United Imaging Healthcare Co., Ltd. Calibration methods and systems for imaging field

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004024659A (en) * 2002-06-27 2004-01-29 Hitachi Medical Corp X-ray ct apparatus
CN1657011A (en) * 2005-03-22 2005-08-24 东软飞利浦医疗设备系统有限责任公司 X-ray computerised tomograph capable of automatic eliminating black false image
US8471921B1 (en) * 2008-06-23 2013-06-25 Marvell International Ltd. Reducing optical crosstalk and radial fall-off in imaging sensors
CN104318536A (en) * 2014-10-21 2015-01-28 沈阳东软医疗系统有限公司 Method and device for CT image correction
CN107330949A (en) * 2017-06-28 2017-11-07 上海联影医疗科技有限公司 A kind of artifact correction method and system
CN107862670A (en) * 2017-11-30 2018-03-30 电子科技大学 A kind of image recovery method for infrared imaging electrical crosstalk
CN110349236A (en) * 2019-07-15 2019-10-18 上海联影医疗科技有限公司 A kind of method for correcting image and system
CN110555834A (en) * 2019-09-03 2019-12-10 明峰医疗系统股份有限公司 CT bad channel real-time detection and reconstruction method based on deep learning network


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
K. Dijkstra et al.: "Hyperspectral demosaicking and crosstalk correction using deep learning", Machine Vision and Applications *
Zhou Rifeng et al.: "Crosstalk correction of high-resolution CCD radiation detectors", Atomic Energy Science and Technology *




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant