WO2020087780A1 - Electronic computed tomography front-end device, system, method and storage medium - Google Patents

Electronic computed tomography front-end device, system, method and storage medium

Info

Publication number
WO2020087780A1
WO2020087780A1 · PCT/CN2019/071198 · CN2019071198W
Authority
WO
WIPO (PCT)
Prior art keywords
image
batch
neural network
emission
reconstructed image
Prior art date
Application number
PCT/CN2019/071198
Other languages
English (en)
French (fr)
Inventor
胡战利
梁栋
李思玥
杨永峰
刘新
郑海荣
Original Assignee
深圳先进技术研究院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳先进技术研究院 filed Critical 深圳先进技术研究院
Publication of WO2020087780A1 publication Critical patent/WO2020087780A1/zh

Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment
    • A61B6/02Devices for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
    • A61B6/03Computerised tomographs
    • A61B6/032Transmission computed tomography [CT]
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment
    • A61B6/40Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment with arrangements for generating radiation specially adapted for radiation diagnosis
    • A61B6/4007Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment with arrangements for generating radiation specially adapted for radiation diagnosis characterised by using a plurality of source units
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment
    • A61B6/50Clinical applications
    • A61B6/502Clinical applications involving diagnosis of breast, i.e. mammography
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment
    • A61B6/52Devices using data or image processing specially adapted for radiation diagnosis
    • A61B6/5258Devices using data or image processing specially adapted for radiation diagnosis involving detection or reduction of artifacts or noise
    • A61B6/5264Devices using data or image processing specially adapted for radiation diagnosis involving detection or reduction of artifacts or noise due to motion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Definitions

  • the present invention belongs to the field of medical technology, and particularly relates to an electronic computed tomography front-end device, system, method, and storage medium.
  • Digital Breast Tomosynthesis (DBT)
  • the rapid movement of the X-ray emission source enlarges the effective focal spot, reducing the spatial resolution of the reconstructed image.
  • the X-ray emission source moves stepwise, and the rapid acceleration and deceleration between steps causes mechanical vibration of a certain amplitude, producing motion artifacts in the image. As a result, the quality of the reconstructed image cannot be guaranteed, and a reconstructed image of high diagnostic value cannot be provided.
  • the object of the present invention is to provide an electronic computed tomography front-end device, system, method, and storage medium, aiming to solve the problem in the prior art that the use of a single, stepwise-moving emission source yields reconstructed images of low quality.
  • the present invention provides an electronic computed tomography (CT) front-end device, the device including: an emission source,
  • the emission source includes: a plurality of emission units arranged in a predetermined manner and controlled to sequentially perform corresponding scanning actions, the emission direction of each emission unit passing through the scanning center.
  • the transmitting units are arranged in an arc shape, and the center of the circle corresponding to the arc shape corresponds to the scanning center.
  • the radius corresponding to the arc is 10–150 cm, and/or the central angle corresponding to the arc segment between two adjacent emitting units is 5–50 degrees.
  • the emission source includes 15 emission units, the radius corresponding to the arc is 65 cm, and the central angle corresponding to the arc segment between two adjacent emission units is 5 degrees.
  • the emission unit uses a carbon nanotube cathode
  • the emission source also includes:
  • a conductive strip provided on the base, used to carry the emission units and provide their electrical connection; the conductive strip is assembled with each emission unit by a screw.
  • the present invention provides a CT system, the system includes: the CT front-end device and the workstation as described above,
  • the CT front-end equipment also includes:
  • a detector used to obtain a corresponding projection image when a transmitting unit performs a scanning action
  • the workstation includes a memory and a processor, and the processor implements the following steps when executing the computer program stored in the memory:
  • using a deep learning method to identify the lesion in the reconstructed image specifically includes the following steps:
  • the initial image is input to a deep learning neural network to identify the lesion, and specifically includes the following steps:
  • the residual convolutional neural network includes a convolutional network layer, an activation function network layer and a batch normalization network layer,
  • an adjustment factor is used to process the batch standard data to obtain batch adjustment data having the same or a similar distribution as the input batch data, for output.
  • the present invention also provides a method for identifying a lesion in a breast. The method is based on the system described above; the projection images are projection images of a breast, and the recognition result indicates whether a lesion exists in the breast. The method includes the following steps:
  • the deep learning method is used to identify the lesions in the reconstructed image.
  • the present invention also provides a computer-readable storage medium that stores a computer program, and when the computer program is executed by a processor, the steps in the foregoing method are implemented.
  • the emission source of the CT front-end device includes: a plurality of emission units arranged in a predetermined arrangement and sequentially controlled to perform corresponding scanning actions, and the emission direction of the emission units passes through the scanning center.
  • the multiple transmitting units can be controlled to perform the scanning actions quickly in sequence, without a single transmitting unit moving stepwise to collect projection images at different projection angles. This saves the moving time of the transmitting unit, ensures that the spatial resolution of the reconstructed image meets the requirements, enables fast scanning, and avoids the image motion artifacts caused by rapid acceleration and deceleration during stepped movement, thereby guaranteeing the quality of the reconstructed image while scanning quickly.
  • FIG. 1 is a schematic structural diagram of a CT front-end device according to Embodiment 1 of the present invention.
  • FIG. 2 is a schematic diagram of a first arrangement of transmitting units in Embodiment 1 of the present invention.
  • FIG. 3 is a schematic diagram of a second arrangement of transmitting units in Embodiment 1 of the present invention.
  • FIG. 4 is a schematic diagram of a third arrangement of transmitting units in Embodiment 1 of the present invention.
  • FIG. 5 is a schematic structural diagram of an emission source in Embodiment 3 of the present invention.
  • FIG. 6 is a schematic structural diagram of a CT system provided by Embodiment 4 of the present invention.
  • FIG. 7 is a processing flowchart of a workstation provided in Embodiment 5 of the present invention.
  • FIG. 8 is a schematic structural diagram of a deep learning neural network in Embodiment 5 of the present invention.
  • FIG. 9 is a schematic structural diagram of a residual convolutional neural network in Embodiment 6 of the present invention.
  • FIG. 10 is a flowchart of processing of a batch normalized network layer in Embodiment 6 of the present invention.
  • FIG. 11 is a flowchart of a method for identifying breast lesions according to Embodiment 7 of the present invention.
  • FIG. 12 is a schematic structural diagram of a deep learning neural network according to an application example of the present invention.
  • FIG. 1 shows an electronic computed tomography (CT) front-end device provided in Embodiment 1 of the present invention.
  • the CT front-end device is mainly used to collect CT projection images of corresponding parts of the human body, such as breast projection images, brain projection images, and liver projection images. For ease of description, only the parts relevant to the embodiment of the present invention are shown, detailed as follows:
  • the CT front-end device includes: a device base 101, and an emission source 102, a detector 103, a control circuit board, a display, a console, a network module, etc., located on the device base 101.
  • the device base 101 may specifically include a compression plate 104;
  • the emission source 102 may be an X-ray emission source, a gamma-ray emission source, etc.
  • the emission source 102 may be carried by a C-shaped arm 105 and electrically communicate with other components;
  • the detector 103 may be a flat panel detector, which obtains the corresponding projection image when a transmitting unit of the emission source 102 performs a scanning action.
  • the flat panel detector is usually set in conjunction with a carrier table, on which a carbon fiber board for electromagnetic shielding is usually placed.
  • the control circuit board controls the entire device; it can adopt a distributed or centralized control mode to control the operation of the other components.
  • the control circuit board can be provided with corresponding processors and memories, through which users control operations such as scanning and image acquisition; the console can also be combined with a display to form a touch screen; the network module allows the device to interact with workstations, the cloud, and so on.
  • the emission source is set corresponding to the detector. When the emission source 102 performs X-ray or γ-ray emission in a scanning operation (for radiation, the scanning operation may also be referred to as an exposure operation), the detector 103 detects the corresponding projection image.
  • the emission source 102 includes: a plurality of emission units 201 arranged in a predetermined arrangement and sequentially controlled to perform corresponding scanning actions, wherein the emission direction of the emission unit 201 passes through the scanning center A.
  • the emission unit 201 may be an X-ray emission unit, a gamma-ray emission unit, or the like.
  • the predetermined arrangement may be an arc, as shown in FIG. 2, a straight line, as shown in FIG. 3, or a staggered arrangement, as shown in FIG. 4, as long as it allows fast scanning and lets the detector 103 obtain projection images of guaranteed quality.
  • a plurality of emission units 201 are integrated into the emission source 102 in a predetermined arrangement manner to form an emission unit array.
  • the cathodes of the emitting units 201 at different positions produce X-ray focal spots at correspondingly different positions, so the detector 103 can obtain projection images at different viewing angles, from which the CT reconstructed image is then computed.
  • the multiple transmitting units 201 can be controlled to quickly perform the scanning actions in sequence, without a single transmitting unit moving stepwise to collect projection images at different projection angles. This saves the moving time of the transmitting unit, ensures that the spatial resolution of the reconstructed image meets the requirements, enables fast scanning, and avoids the image motion artifacts caused by rapid acceleration and deceleration during stepped movement, thereby guaranteeing the quality of the reconstructed image while scanning quickly.
  • the emitting units 201 are arranged in an arc, and the center of the circle corresponding to the arc corresponds to the scanning center A, so that the emission direction of each emitting unit 201 passes through the scanning center A.
  • the setting of each emitting unit 201 is similar and the imaging effects at different angles are consistent.
  • the radius R corresponding to the above-mentioned arc may be 10–150 cm, and/or the central angle α corresponding to the arc segment between two adjacent emitting units 201 may be 5–50 degrees.
  • the transmitting units 201 may be uniformly distributed on the entire arc corresponding to the above-mentioned arc, and of course, may be non-uniformly distributed as needed.
  • the emission source 102 may include 15 emission units 201, with an arc radius of 65 cm and a central angle of 5 degrees for the arc segment between two adjacent emission units 201. The values of these parameters are related to the number of projection images to be acquired, the projection angles, the space occupied by each emission unit 201, and so on.
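  • the arc geometry described above (15 units, 65 cm radius, 5-degree spacing between neighbours) can be sketched numerically. The following is an illustrative calculation, not from the patent; the array is assumed symmetric about the line through the scanning center:

```python
import math

def emitter_positions(n_units=15, radius_cm=65.0, step_deg=5.0):
    """Place n_units emitters on an arc centred on the scanning centre,
    symmetric about the central axis, step_deg apart.  Each emitter's
    emission direction (toward the centre) then passes through it."""
    total_span = (n_units - 1) * step_deg  # 70 degrees for 15 units
    angles = [math.radians(-total_span / 2 + i * step_deg)
              for i in range(n_units)]
    # (x, y) with the scanning centre at the origin; every emitter sits
    # exactly radius_cm from the centre, matching the arc radius R.
    return [(radius_cm * math.sin(a), radius_cm * math.cos(a))
            for a in angles]

positions = emitter_positions()
```

With these parameters the middle unit sits on the central axis and the whole array spans 70 degrees, which is consistent with every unit being equidistant from the detector center as described in Embodiment 3.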
  • this embodiment further provides the following content:
  • the emission unit 201 uses a carbon nanotube (CNT) cathode, and the emission source 102 further includes:
  • a conductive strip 502 provided on the base 501, used to carry the emission units 201 and provide their electrical connection; the conductive strip 502 and the emission units 201 are assembled by screws.
  • CNT has stable chemical properties, extremely large aspect ratio and other characteristics, and is an ideal field emission material.
  • the X-ray emission source 102 based on the CNT cathode can realize high time resolution and programmable X-ray emission, and has the characteristics of miniaturization, low power consumption, long life and fast ignition.
  • Each emitting unit 201 is a separately packaged glass bulb, and each glass bulb includes: a CNT cathode, a grid, a focusing electrode, and an anode target.
  • the CNT cathode, grid and focusing electrode can be designed into an integrated structure of electron emission.
  • the anode target and the conductive base are machined together; when packaging the glass bulb, only the target and the integrated electron-emission structure need to be fixed. This is compatible with the existing hot-cathode bulb packaging process and helps improve packaging efficiency and yield.
  • a metal cover is designed to shield the secondary electrons, reducing the probability of arcing in the glass bulb.
  • the arc-shaped conductive strip 502 is designed to be installed on the arc-shaped base 501 in advance.
  • the conductive strip 502 can be made of copper.
  • threaded mounting holes are reserved and fitted with machined screws; the glass bulbs are then mounted on the screws, and anode high-voltage connection holes are reserved on the arc-shaped conductive strip 502. The conductive strip 502 thus serves both to conduct electricity and to support and fix the bulbs.
  • the distance between the glass bulb and the base 501 is about 60 cm (shown as R′ in FIG. 5), and the distance to the detector 103 is designed to be about 65 cm, consistent with the above-mentioned arc radius R, so that each transmitting unit 201 on the arc array is the same distance from the center of the detector 103.
  • FIG. 6 shows the structure of the CT system provided by Embodiment 4 of the present invention.
  • the system includes the CT front-end device 601 described in the foregoing embodiments and a workstation 602.
  • the workstation 602 and the CT front-end device 601 can be connected through a network, or the workstation 602 and the CT front-end device 601 are integrated in a physical entity, and the function of the workstation 602 can be implemented by corresponding software or hardware.
  • the workstation 602 includes a memory 6021 and a processor 6022.
  • the processor 6022 executes the computer program 6023 stored in the memory 6021, the steps in the following method are implemented:
  • process each projection image to obtain a reconstructed image, and use a deep learning method to identify the lesions in the reconstructed image.
  • any suitable deep learning method can be used to identify the lesions in the reconstructed image, for example: the region-based convolutional neural network (R-CNN), the fast region-based convolutional neural network (Fast R-CNN), or the Single Shot MultiBox Detector (SSD).
  • the use of artificial intelligence (Artificial Intelligence, AI) diagnosis of medical images can assist doctors to improve screening efficiency and effectively reduce the probability of missed diagnosis and misdiagnosis.
  • this embodiment further provides the following content:
  • each projection image is processed to obtain a reconstructed image.
  • the projection reconstruction technique involved in this step uses X-rays, ultrasonic waves, etc. to form perspective projection images through the scanned object (such as human internal organs or an underground ore body), and computes tomograms of the object from those projection images to obtain the reconstructed image,
  • the reconstructed image may be several slice images.
  • this reconstruction technique relies on the fact that X-rays and ultrasound are absorbed differently as they pass through different structures of the scanned object, producing different projection intensities on the imaging surface; the internal structure of the object is then recovered by inversion.
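  • as a rough illustration of projection reconstruction (the patent does not fix a particular algorithm), a minimal unfiltered parallel-beam backprojection might look like the sketch below. `backproject` and its geometry are assumptions for the sketch; a real DBT reconstruction would use cone-beam geometry and filtering:

```python
import numpy as np

def backproject(projections, angles_deg):
    """Minimal unfiltered backprojection: each row of `projections` is a
    1-D parallel-beam projection taken at the matching angle; smear each
    back across the image plane along its acquisition angle and average."""
    size = projections.shape[1]
    c = (size - 1) / 2.0
    ys, xs = np.mgrid[0:size, 0:size]
    recon = np.zeros((size, size))
    for proj, ang in zip(projections, np.deg2rad(angles_deg)):
        # Detector coordinate of every pixel for this viewing angle.
        t = (xs - c) * np.cos(ang) + (ys - c) * np.sin(ang) + c
        idx = np.clip(np.round(t).astype(int), 0, size - 1)
        recon += proj[idx]
    return recon / len(angles_deg)
```

The averaging over views is what turns differing projection intensities into an estimate of the internal structure; the filtering step omitted here is what a practical system adds to sharpen that estimate.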
  • step S702 pre-process the reconstructed image to obtain an initial image.
  • preprocessing may involve cropping the image to reduce redundant calculations.
  • the initial image is input to the deep learning neural network to identify the lesion, and a recognition result is obtained.
  • the deep learning neural network architecture may be as shown in FIG. 8 and specifically includes: a convolution subnetwork 801, a candidate frame subnetwork 802, and a fully connected subnetwork 803. The processing of each sub-network is roughly as follows:
  • the convolution subnetwork 801 can perform feature extraction processing on the initial image to obtain a convolution feature image.
  • the convolution subnetwork 801 may include a multi-segment convolutional neural network; each segment may use a residual convolutional neural network, to alleviate the problems of vanishing and exploding gradients, or a non-residual convolutional neural network. Of course, the convolution subnetwork 801 may also combine non-residual and residual convolutional neural networks.
  • the candidate frame sub-network 802 can determine candidate regions for the convolutional feature image, and correspondingly obtain a fully connected feature map.
  • the candidate frame sub-network 802 may use a sliding window of a predetermined size and, based on the center point of each sliding window, generate a predetermined number of candidate frames of predetermined sizes on the initial image, with the center point of each candidate frame corresponding to the center point of the sliding window.
  • candidate regions corresponding to each candidate frame can be obtained. Each candidate region correspondingly generates a candidate region feature map.
  • Candidate region feature maps can also be pooled accordingly to obtain fully connected feature maps.
  • the fully connected sub-network 803 may perform classification and other processing based on the fully connected feature map to obtain a recognition result, and the recognition result indicates whether a lesion exists.
  • the two branches of the fully-connected sub-network 803 may be respectively subjected to corresponding classification, regression and other processing.
  • the corresponding fully-connected sub-network 803 may correspondingly include a classification network layer and a regression network layer.
  • the classification network layer can be used to determine whether the candidate area is the foreground or the background according to the fully connected feature map, that is, whether there is a lesion in the candidate area.
  • the regression network layer can be used to modify the coordinates of the candidate frame and finally determine the location of the lesion.
  • this embodiment further provides the following content:
  • in the convolution subnetwork 801, several residual convolutional neural networks may be used to extract features from the initial image. As shown in FIG. 9, the residual convolutional neural network may include multiple network layers: a convolutional network layer 901, an activation function network layer 902, and a batch normalization network layer 903. The processing of each network layer is roughly as follows:
  • the convolutional network layer 901 may use a preset convolution kernel to perform convolution processing on the input image.
  • the activation function network layer 902 may use a sigmoid function, a hyperbolic tangent (Tanh) function, or a rectified linear unit (ReLU) function for the activation processing.
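  • the three activation choices named above can be written out directly; `sigmoid`, `tanh`, and `relu` below are just the standard definitions, not code from the patent:

```python
import math

def sigmoid(x):
    # S-type function: squashes any real input into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    # Hyperbolic tangent: squashes input into (-1, 1).
    return math.tanh(x)

def relu(x):
    # Rectified linear unit, the choice used later in formula (6).
    return max(0.0, x)
```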
  • the batch normalization network layer 903 can not only realize the traditional standardization process, but also enable the network to accelerate convergence and further alleviate the problems of gradient disappearance and gradient explosion.
  • the processing of the batch normalized network layer 903 may specifically include the steps shown in FIG. 10:
  • step S1001, the input batch data processed by the convolutional network layer 901 is averaged.
  • step S1002 the variance of the batch data is calculated based on the mean.
  • step S1003 the batch data is standardized according to the mean and variance to obtain batch standard data.
  • step S1004 the batch standard data is processed using an adjustment factor to obtain batch adjustment data having the same or similar distribution as the input batch data for output.
  • the adjustment factor has a corresponding initial value at initialization; starting from this initial value, it can be trained, together with the other network-layer parameters, during back-propagation, so that the adjustment factor learns the distribution of the input batch data. After batch normalization, the distribution of the original input batch data is thus retained.
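  • steps S1001–S1004 can be sketched as a single function. `batch_norm` and its argument names are illustrative, with `gamma` and `beta` standing in for the trainable adjustment factors described above:

```python
import numpy as np

def batch_norm(batch, gamma=1.0, beta=0.0, eps=1e-5):
    """Sketch of steps S1001-S1004: mean, variance, standardization,
    then a learned scale/shift that lets the layer recover the input
    distribution.  gamma and beta would be trained by back-propagation."""
    mu = batch.mean()                          # S1001: batch mean
    var = ((batch - mu) ** 2).mean()           # S1002: batch variance
    x_hat = (batch - mu) / np.sqrt(var + eps)  # S1003: standardize
    return gamma * x_hat + beta                # S1004: adjusted output
```

With `gamma=1, beta=0` (the initial values noted in the embodiment below) the output is simply the standardized data; once trained, `gamma` and `beta` restore the learned distribution of the input batch.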
  • This embodiment further provides a method for identifying breast lesions on the basis of the systems of the foregoing embodiments, and specifically includes the steps shown in FIG. 11:
  • step S1101 each projection image is processed to obtain a reconstructed image.
  • step S1102 the deep learning method is used to identify the lesion in the reconstructed image.
  • each step may be similar to the content described in the corresponding positions in the foregoing embodiments, and will not be repeated here.
  • a computer-readable storage medium stores a computer program; when the computer program is executed by a processor, the steps in the foregoing method embodiments are implemented, for example steps S1101 to S1102 shown in FIG. 11.
  • the functions described in the foregoing system embodiments are realized, for example, the functions of the aforementioned deep learning neural network.
  • the computer-readable storage medium in the embodiments of the present invention may include any entity or device capable of carrying computer program code, and a recording medium, such as ROM / RAM, magnetic disk, optical disk, flash memory, and other memories.
  • the deep learning neural network can be used to identify the lesions (calcifications) in the breast, and can specifically include the architecture shown in FIG. 12:
  • the entire deep learning neural network includes: a convolution subnetwork 801, a candidate frame subnetwork 802, and a fully connected subnetwork 803.
  • the convolution subnetwork 801 includes a first segment convolutional neural network 1201, a pooling layer 1202, a second segment convolutional neural network 1203, a third segment convolutional neural network 1204, and a fourth segment convolutional neural network 1205.
  • the first-segment convolutional neural network 1201 uses a non-residual convolutional neural network, while the second-segment convolutional neural network 1203, the third-segment convolutional neural network 1204, and the fourth-segment convolutional neural network 1205 use residual convolutional neural networks.
  • the residual convolutional neural network includes multiple network layers, which are still shown in FIG. 9: a convolutional network layer 901, an activation function network layer 902, and a batch normalization network layer 903.
  • the candidate frame sub-network 802 includes: a regional candidate network (Region Proposal Network, RPN) 1206 and a regional pooling network 1207.
  • RPN Regional Proposal Network
  • the fully connected sub-network 803 includes: a classification network layer 1208 and a regression network layer 1209.
  • a fifth segment convolutional neural network 1211 is also included between the candidate box subnetwork 802 and the fully connected subnetwork 803.
  • a mask network layer 1210 is also set.
  • the reconstructed image obtained by processing the projection images is pre-processed into an initial image of size 224 × 224.
  • the reconstructed image referred to here is usually a slice image.
  • the initial image is input to the first-segment convolutional neural network 1201 for initial feature extraction by convolution.
  • the resulting feature map is processed by the pooling layer 1202 and then passed to the second-segment convolutional neural network 1203, the third-segment convolutional neural network 1204, and the fourth-segment convolutional neural network 1205 for further feature extraction.
  • the size of the convolution kernel used in the first-stage convolutional neural network 1201 for convolution calculation is 7 ⁇ 7, and the step size is 2, which can reduce the data size by half.
  • the size of the feature map output by the first-stage convolutional neural network 1201 is 112 ⁇ 112. After the feature map output by the first segment of the convolutional neural network 1201 is processed by the pooling layer 1202, the size of the feature map is 56 ⁇ 56.
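  • the size arithmetic above (224 → 112 → 56) can be checked with the standard output-size formula. The padding values below (3 for the 7 × 7 convolution, 1 for the 3 × 3 pooling) are assumptions, since the text states only kernel size and stride:

```python
def conv_out_size(size, kernel, stride, pad):
    # Standard formula: floor((size + 2*pad - kernel) / stride) + 1.
    return (size + 2 * pad - kernel) // stride + 1

# First-stage convolution: 7x7 kernel, stride 2, assumed padding 3,
# halves the 224x224 initial image to 112x112.
s1 = conv_out_size(224, kernel=7, stride=2, pad=3)

# 3x3 pooling with stride 2 and assumed padding 1 halves again: 56x56.
s2 = conv_out_size(s1, kernel=3, stride=2, pad=1)
```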
  • the convolutional network layer 901 in the residual convolutional neural network used here can perform the calculation shown in the following formula (1):
  • S(i, j) = Σ_m Σ_n I(i + m, j + n) · K(m, n)  (1)
  • where i, j are the pixel coordinates in the input image, I is the input image data, K is the convolution kernel, m and n run over the width p and height q of the convolution kernel, and S(i, j) is the output convolution data.
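  • formula (1) is the usual direct 2-D convolution (written as cross-correlation) and can be implemented literally. `conv2d` below is an illustrative valid-mode sketch, not the patent's implementation:

```python
import numpy as np

def conv2d(I, K):
    """Direct (valid-mode) 2-D convolution matching formula (1):
    S(i, j) = sum_m sum_n I(i+m, j+n) * K(m, n)."""
    kh, kw = K.shape
    h, w = I.shape
    S = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(S.shape[0]):
        for j in range(S.shape[1]):
            # Elementwise product of the kernel with the image patch
            # anchored at (i, j), summed to one output value.
            S[i, j] = (I[i:i + kh, j:j + kw] * K).sum()
    return S
```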
  • the batch normalization network layer 903 can perform the following calculations (2)–(5):
  • μ = (1/m) Σ_i x_i  (2)
  • σ² = (1/m) Σ_i (x_i − μ)²  (3)
  • x̂_i = (x_i − μ) / √(σ² + ε)  (4)
  • y_i = γ x̂_i + β  (5)
  • where the input batch data {x_1 … x_m} is the output data of the convolutional network layer 901, m is the total number of data, ε is a small positive number that prevents division by zero, γ is the scaling factor, and β is the translation factor.
  • the adjustment factors γ and β have corresponding initial values at initialization: the initial value of γ is approximately 1, and the initial value of β is approximately 0. Starting from these initial values, γ and β can be trained together with the other network-layer parameters during back-propagation, so that γ and β learn the distribution of the input batch data; after batch normalization, the distribution of the originally input batch data is thus retained.
  • the activation function network layer 902 can perform the calculation shown in the following formula (6):
  • f(x) = max(0, x)  (6)
  • where x is the output data of the batch normalization network layer 903 and f(x) is the output of the activation function network layer 902.
  • the above three operations of the convolutional network layer 901, the activation function network layer 902, and the batch normalization network layer 903 can form a neural network block.
  • the second-segment convolutional neural network 1203 has 3 neural network blocks; in each, one convolution layer uses kernels of size 1 × 1 with 64 kernels, one uses kernels of size 3 × 3 with 64 kernels, and one uses kernels of size 1 × 1 with 256 kernels.
  • the third-segment convolutional neural network 1204 has 4 neural network blocks; in each, one convolution layer uses kernels of size 1 × 1 with 128 kernels, one uses kernels of size 3 × 3 with 128 kernels, and one uses kernels of size 1 × 1 with 512 kernels.
  • the fourth-segment convolutional neural network 1205 has 23 neural network blocks; in each, one convolution layer uses kernels of size 1 × 1 with 256 kernels, one uses kernels of size 3 × 3 with 256 kernels, and one uses kernels of size 1 × 1 with 1024 kernels. Finally, after the first- to fourth-segment convolutional neural networks, the output convolution feature image is 14 × 14 × 1024, i.e. its spatial size is 14 × 14 with 1024 channels.
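  • the block counts and kernel widths above follow the familiar bottleneck pattern (each block: 1 × 1, then 3 × 3, then 1 × 1 with four times the base width). A quick tally under that bottleneck reading, an assumption on our part, confirms the 1024-channel output:

```python
# Bottleneck pattern from the text: per block, (1x1, n), (3x3, n), (1x1, 4n).
stages = [
    (3, 64),    # second-segment CNN: 3 blocks, base width 64  -> 256 out
    (4, 128),   # third-segment CNN:  4 blocks, base width 128 -> 512 out
    (23, 256),  # fourth-segment CNN: 23 blocks, base width 256 -> 1024 out
]

# Channel count after the last stage matches the stated 14x14x1024 output.
out_channels = stages[-1][1] * 4

# Three convolution layers per bottleneck block.
conv_layers = sum(blocks * 3 for blocks, _ in stages)
```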
  • the convolution feature image processed by the convolution sub-network 801 is then input into the RPN 1206 and the regional pooling network 1207 for corresponding processing.
  • RPN1206 is used to extract candidate regions. Specifically, a sliding window with a predetermined size of 3 ⁇ 3 is used. Based on the center point of each sliding window, a predetermined number of 9 candidate frames with a predetermined size are generated on the initial image. Each candidate The center point of the frame corresponds to the center point of the sliding window. Correspondingly, candidate regions corresponding to each candidate frame can be obtained. Each candidate region correspondingly generates a candidate region feature map.
  • the output convolutional feature image is 14 ⁇ 14 ⁇ 1024
  • the predetermined size of the sliding window is 3 ⁇ 3
  • the predetermined number of candidate frames is 9, then 256 can be obtained accordingly
  • candidate region feature maps that is, 256-dimensional fully connected features.
  • the area size of some candidate frames is the same, and the area size of this partial candidate frame is different from the area size of other partial candidate frames.
  • the area and aspect ratio of the candidate frames can be obtained according to the settings.
  • the area pooling network 1207 is used to pool the candidate area feature map into a fixed-size pooled feature map according to the position coordinates of the candidate frame.
  • the regional pooling network 1207 can be RoiAlign network.
  • the candidate box is derived from the regression model, which is generally a floating-point number.
  • the RoiAlign network does not quantize floating-point numbers. For each candidate box, divide the candidate region feature map into 7 ⁇ 7 units, fix four coordinate positions in each unit, calculate the values of the four positions by bilinear interpolation, and then perform the maximum pooling operation . For each candidate box, a pooled feature map of 7 ⁇ 7 ⁇ 1024 is obtained, and all pooled feature maps constitute the initial fully connected feature map.
  • the fifth segment convolutional neural network 1211 has 3 neural network blocks, of which the size of the convolution kernel used in one neural network block is 1 ⁇ 1 and the number of convolution kernels is 512; The size of the convolution kernel used is 3 ⁇ 3, and the number of convolution kernels is 512; there is also a convolution kernel size used in the neural network block of 1 ⁇ 1, and the number of convolution kernels is 2048.
  • the final fully-connected feature map processed by the fifth-stage convolutional neural network 1211 enters three branches of the fully-connected sub-network 803: a classification network layer 1208, a regression network layer 1209, and a mask network layer 1210.
  • the classification network layer 1208 is used to input the final fully connected feature map processed by the fifth segment convolutional neural network 1211, and to judge whether the candidate area is the foreground or the background, and the output is an array of 14 ⁇ 14 ⁇ 18, where “18 "Means that the nine candidate boxes will output both foreground and background results.
  • the regression network layer 1209 is used to predict the coordinates, height and width of the center anchor point of the candidate frame, and to modify the coordinates of the candidate frame.
  • the output is 14 ⁇ 14 ⁇ 36, where “36” represents the four endpoint values of the nine candidate frames.
  • the mask network layer 1210 uses a 2 ⁇ 2 convolution kernel of a certain size to upsample the feature map of the candidate area that is determined to be a calcification and has undergone position correction, to obtain a 14 ⁇ 14 ⁇ 256 feature map.
  • the convolution process obtains a 14 ⁇ 14 ⁇ 2 feature map, which is then masked to segment the foreground and background.
  • the number of categories is 2, indicating the presence or absence of breast calcifications.
  • the location of the calcifications can be further obtained.
  • the calculation of the classification network layer loss function used in the fully connected sub-network 803 to optimize the classification is shown in the following formula (7), which is used to optimize the regression when the classification result is the presence of calcified foci
  • the calculation of the regression network layer loss function is shown in the following formula (8).
  • the value of b is (ti-ti '), ti is the predicted coordinate, and ti' is the real coordinate.
  • the optimization processing of the mask processing may involve: in the classification processing, the cross entropy is calculated after the activation function Sigmoid processing.

Abstract

A computed tomography front-end device, system, method, and storage medium. The emission source of the front-end device comprises several emission units (201) arranged in a predetermined pattern and controlled to perform their respective scanning actions in sequence, the exit direction of each emission unit (201) passing through the scan center. In this way, the multiple emission units can be controlled to execute scanning actions rapidly in sequence, without a single emission unit having to move step by step to acquire projection images at different projection angles. This saves the emission-unit travel time, allows fast scanning while keeping the spatial resolution of the reconstructed image up to requirements, and avoids the image motion artifacts caused by the sharp acceleration and deceleration of stepwise movement, thereby guaranteeing reconstructed-image quality while scanning quickly.

Description

Computed tomography front-end device, system, method, and storage medium — Technical Field
The present invention belongs to the field of medical technology, and in particular relates to a computed tomography front-end device, system, method, and storage medium.
Background
Digital Breast Tomosynthesis (DBT) is a new tomographic imaging technique developed by combining digital image processing with the geometric principles of conventional tomography. A single X-ray or γ-ray emission source moves rapidly step by step around the breast, acquiring images of the breast from different angles to obtain low-dose projection data at different projection angles, from which X-ray density images of the breast at arbitrary depths parallel to the detector plane can be reconstructed. The technique features a small radiation dose, images at arbitrary slice depths, and the possibility of further processing to display three-dimensional information.
However, the rapid motion of the X-ray emission source enlarges the effective focal spot, reducing the spatial resolution of the reconstructed image. In addition, the sharp acceleration and deceleration during the stepwise movement of the X-ray emission source cause mechanical vibration of a certain amplitude, leading to image motion artifacts. The quality of the reconstructed image therefore cannot be guaranteed, and reconstructed images of high reference value cannot be provided.
Summary of the Invention
The object of the present invention is to provide a computed tomography front-end device, system, method, and storage medium, aiming to solve the prior-art problem of low reconstructed-image quality caused by a single, stepwise-moving emission source.
In one aspect, the present invention provides a computed tomography (CT) front-end device, the device comprising an emission source,
the emission source comprising: several emission units arranged in a predetermined pattern and controlled to perform their respective scanning actions in sequence, the exit direction of each emission unit passing through the scan center.
Further, the emission units are arranged in an arc, and the center of the circle corresponding to the arc corresponds to the scan center.
Further, the radius corresponding to the arc is 10–150 cm, and/or the central angle corresponding to the arc segment between two adjacent emission units is 5–50 degrees.
Further, the emission source comprises 15 of the emission units, the radius corresponding to the arc is 65 cm, and the central angle corresponding to the arc segment between two adjacent emission units is 5 degrees.
Further, the emission units use carbon nanotube cathodes,
and the emission source further comprises:
a base; and,
a conductive strip arranged on the base for carrying the emission units and providing their electrical connection, the conductive strip and the emission units being assembled together by screws.
In another aspect, the present invention provides a CT system comprising the CT front-end device described above and a workstation,
the CT front-end device further comprising:
a detector for obtaining corresponding projection images when the emission units perform scanning actions,
and the workstation comprising a memory and a processor, the processor implementing the following steps when executing the computer program stored in the memory:
processing the projection images to obtain a reconstructed image, and identifying lesions in the reconstructed image using a deep learning method.
Further, identifying lesions in the reconstructed image using a deep learning method specifically comprises the following steps:
preprocessing the reconstructed image to obtain an initial image;
inputting the initial image into a deep learning neural network to identify the lesions and obtain an identification result,
wherein inputting the initial image into the deep learning neural network to identify the lesions specifically comprises the following steps:
performing feature extraction on the initial image to obtain a convolution feature image;
determining candidate regions on the convolution feature image to obtain fully connected feature maps accordingly;
classifying based on the fully connected feature maps to obtain the identification result.
Further, performing feature extraction on the initial image to obtain a convolution feature image is specifically:
using several residual convolutional neural networks to perform feature extraction on the initial image,
wherein the residual convolutional neural network comprises a convolutional network layer, an activation function network layer, and a batch normalization network layer,
and using several residual convolutional neural networks to perform feature extraction on the initial image specifically comprises the following steps:
computing the mean of the input batch data through the batch normalization network layer;
computing the variance of the batch data from the mean;
standardizing the batch data according to the mean and the variance to obtain batch standard data;
processing the batch standard data with adjustment factors to obtain, for output, batch-adjusted data whose distribution is the same as or similar to that of the input batch data.
In another aspect, the present invention further provides a method for identifying lesions in the breast, the method being based on the system described above, wherein the projection images are breast projection images and the identification result indicates whether a lesion is present in the breast, the method comprising the following steps:
processing the projection images to obtain a reconstructed image;
identifying lesions in the reconstructed image using a deep learning method.
In another aspect, the present invention further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the above method.
In the present invention, the emission source of the CT front-end device comprises several emission units arranged in a predetermined pattern and controlled to perform their respective scanning actions in sequence, the exit direction of each emission unit passing through the scan center. In this way, the multiple emission units can be controlled to execute scanning actions rapidly in sequence, without a single emission unit moving step by step to acquire projection images at different projection angles. This saves the emission-unit travel time, allows fast scanning while keeping the spatial resolution of the reconstructed image up to requirements, and avoids the image motion artifacts caused by the sharp acceleration and deceleration of stepwise movement, thereby guaranteeing reconstructed-image quality while scanning quickly.
Brief Description of the Drawings
Fig. 1 is a schematic structural diagram of the CT front-end device provided in Embodiment 1 of the present invention;
Fig. 2 is a schematic diagram of a first arrangement of the emission units in Embodiment 1 of the present invention;
Fig. 3 is a schematic diagram of a second arrangement of the emission units in Embodiment 1 of the present invention;
Fig. 4 is a schematic diagram of a third arrangement of the emission units in Embodiment 1 of the present invention;
Fig. 5 is a schematic structural diagram of the emission source in Embodiment 3 of the present invention;
Fig. 6 is a schematic structural diagram of the CT system provided in Embodiment 4 of the present invention;
Fig. 7 is a processing flowchart of the workstation provided in Embodiment 5 of the present invention;
Fig. 8 is a schematic architecture diagram of the deep learning neural network in Embodiment 5 of the present invention;
Fig. 9 is a schematic architecture diagram of the residual convolutional neural network in Embodiment 6 of the present invention;
Fig. 10 is a processing flowchart of the batch normalization network layer in Embodiment 6 of the present invention;
Fig. 11 is a flowchart of the breast-lesion identification method of Embodiment 7 of the present invention;
Fig. 12 is a schematic architecture diagram of the deep learning neural network of an application example of the present invention.
Detailed Description of the Embodiments
To make the objects, technical solutions, and advantages of the present invention clearer, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the present invention and are not intended to limit it.
The specific implementation of the present invention is described in detail below with reference to specific embodiments:
Embodiment 1:
Fig. 1 shows the computed tomography (CT) front-end device provided in Embodiment 1 of the present invention. The CT front-end device is mainly used to acquire CT projection images of corresponding parts of the human body, for example breast projection images, brain projection images, or liver projection images. For ease of description, only the parts related to this embodiment of the invention are shown, detailed as follows:
The CT front-end device comprises: a device base 101, and, on the device base 101, an emission source 102, a detector 103, a control circuit board, a display, an operation console, a network module, and so on. The device base 101 may specifically include a compression paddle 104. The emission source 102 may be an X-ray emission source, a γ-ray emission source, etc., and may be carried by, and electrically connected to the other components through, a C-arm 105. The detector 103 may be a flat-panel detector, which obtains the corresponding projection image when an emission unit of the emission source 102 performs a scanning action; a flat-panel detector is usually paired with a support table, on which a carbon fiber plate providing electromagnetic shielding is usually arranged. The control circuit board controls the whole device; it may adopt a distributed control mode or a centralized control mode, may control the operation of the other components, and may carry a corresponding processor, memory, and so on. The operation console allows the user to control scanning, image acquisition, and other operations; it may also be combined with the display to form a touch screen. The network module lets the device interact with a workstation, the cloud, etc. The emission source and the detector are arranged opposite each other: when the emission source 102 emits X-rays or γ-rays to perform a scanning operation (for rays, the scanning operation may also be called an exposure operation), the detector 103 detects the corresponding projection image.
As shown in Fig. 2, the emission source 102 comprises several emission units 201 arranged in a predetermined pattern and controlled to perform their respective scanning actions in sequence, the exit direction of each emission unit 201 passing through the scan center A. In this embodiment, the emission units 201 may be X-ray emission units, γ-ray emission units, etc. The predetermined arrangement is not limited to the arc arrangement of the emission units 201 shown in Fig. 2; it may also be a linear arrangement as shown in Fig. 3, a staggered arrangement as shown in Fig. 4, or any other arrangement, as long as the arrangement allows fast scanning and lets the detector 103 obtain projection images of guaranteed quality.
Part of the working principle of the above CT front-end device is roughly as follows:
Multiple emission units 201 are integrated into the emission source 102 in the predetermined arrangement, forming an emission-unit array. By switching the cathode electron emission of each emission unit 201 on and off in sequence, the cathodes of emission units 201 at different positions produce X-ray focal spots at correspondingly different positions, the detector 103 obtains projection images from different viewing angles, and the reconstructed CT image is then obtained from the projection images.
With this embodiment, the multiple emission units 201 can be controlled to execute scanning actions rapidly in sequence, without a single emission unit moving step by step to acquire projection images at different projection angles. This saves the emission-unit travel time, allows fast scanning while keeping the spatial resolution of the reconstructed image up to requirements, and avoids the image motion artifacts caused by the sharp acceleration and deceleration of stepwise movement, thereby guaranteeing reconstructed-image quality while scanning quickly.
Embodiment 2:
On the basis of Embodiment 1, this embodiment further provides the following content:
In this embodiment, still as shown in Fig. 2, the emission units 201 are arranged in an arc, and the center of the circle corresponding to the arc corresponds to the scan center A. In this way, the exit direction of each emission unit 201 passes through the scan center A, the arrangements of the emission units 201 are similar, and the imaging effect is consistent across angles. The radius R corresponding to the arc may be 10–150 cm, and/or the central angle Θ corresponding to the arc segment between two adjacent emission units 201 may be 5–50 degrees. The emission units 201 may be distributed uniformly along the arc or, as required, non-uniformly. In one application example, the emission source 102 may comprise 15 emission units 201, the radius corresponding to the arc is 65 cm, and the central angle corresponding to the arc segment between two adjacent emission units 201 is 5 degrees; these parameter values are related to the required number of projection images, the projection angles, the space occupied by each emission unit 201, and so on.
Embodiment 3:
On the basis of Embodiment 1 or 2, this embodiment further provides the following content:
As shown in Fig. 5, the emission units 201 use carbon nanotube (CNT) cathodes, and the emission source 102 further comprises:
a base 501; and,
a conductive strip 502 arranged on the base 501 for carrying the emission units 201 and providing their electrical connection, the conductive strip 502 and the emission units 201 being assembled together by screws.
In this embodiment, CNTs have stable chemical properties and a very large aspect ratio, making them an ideal field-emission material. An X-ray emission source 102 based on CNT cathodes can achieve high-time-resolution, programmable X-ray emission, with the features of miniaturization, low power consumption, long life, and fast ignition. Each emission unit 201 is an individually packaged glass tube, and each glass tube contains a CNT cathode, a gate, a focusing electrode, and an anode target. The CNT cathode, gate, and focusing electrode can be designed as an integrated electron-emission structure, and the anode target is machined together with a conductive base; when packaging a glass tube, only the target and the integrated electron-emission structure need to be fixed, so existing hot-cathode tube packaging processes can be used, which helps improve tube packaging efficiency and yield. In addition, to prevent secondary electrons from accumulating on the glass tube and causing arcing, a metal shield is designed on the integrated electron-emission structure to screen secondary electrons and reduce the probability of the glass tube arcing. To secure each glass tube and ensure the spacing between tubes, an arc-shaped conductive strip 502 is pre-mounted on the arc-shaped base 501; the conductive strip 502 may be made of copper. Threaded mounting holes are reserved on the conductive strip 502 at regular intervals (corresponding to the central angle Θ in Fig. 2 above), machined screws are inserted into them, and the glass tubes are then mounted on the screws. Anode high-voltage connection holes are also reserved on the arc-shaped conductive strip 502, so the conductive strip 502 serves both conduction and structural support. In addition, the distance from a glass tube to the base 501 is about 60 cm (as shown by R′ in Fig. 2 above); adding the roughly 5 cm thickness of the electromagnetic-shielding carbon fiber plate on the base 501, the overall distance from emission unit 201 to detector 103 is designed to be about 65 cm, consistent with the arc radius R above, so that every emission unit 201 on the arc array is at the same distance from the center of the detector 103.
Embodiment 4:
Fig. 6 shows the structure of the CT system provided in Embodiment 4 of the present invention; for ease of description, only the parts related to this embodiment are shown. The system comprises: the CT front-end device 601 described in the above embodiments and a workstation 602. The workstation 602 and the CT front-end device 601 may be connected via a network, or the workstation 602 and the CT front-end device 601 may be integrated into one physical entity; the functions of the workstation 602 may be implemented by corresponding software or hardware.
In this embodiment, the workstation 602 comprises a memory 6021 and a processor 6022; when the processor 6022 executes the computer program 6023 stored in the memory 6021, the steps of the following method are implemented:
processing the projection images to obtain a reconstructed image, and identifying lesions in the reconstructed image using a deep learning method. In this embodiment, any suitable deep learning method may be used to identify lesions in the reconstructed image, for example: Regions with Convolutional Neural Network (R-CNN), Fast R-CNN, or the Single Shot MultiBox Detector (SSD).
With this embodiment, artificial intelligence (AI) diagnosis on medical images can help doctors improve screening efficiency and effectively reduce the probability of missed and erroneous diagnoses. Taking breast cancer diagnosis as an example, current diagnosis in the field relies mainly on physicians' interpretation; differences in personal experience often lead to inconsistent diagnostic conclusions, and even for the same physician a certain rate of human error is unavoidable. Moreover, for relatively small breasts the glandular tissue is relatively concentrated, which further increases the difficulty of visual interpretation. The technical solution of this embodiment is therefore particularly effective for the early screening and diagnosis of cancers such as breast cancer.
Embodiment 5:
On the basis of Embodiment 4, this embodiment further provides the following content:
When the processor 6022 executes the computer program 6023 stored in the memory 6021, the steps of the method shown in Fig. 7 are specifically implemented:
In step S701, the projection images are processed to obtain a reconstructed image. The projection-reconstruction technique involved in this step uses X-rays, ultrasound, etc. that pass through the scanned object (e.g., human organs, underground ore bodies) to form perspective projections, and computes tomograms of the object from the perspective projections to obtain a reconstructed image, which may consist of several slice images. The reconstruction technique is based on the fact that, as X-rays or ultrasound pass through different structures of the scanned object, they are absorbed differently, producing different projected intensities on the imaging plane, from which an image of the internal structure distribution of the scanned object is obtained by inversion.
In step S702, the reconstructed image is preprocessed to obtain an initial image. In this embodiment, preprocessing may involve cropping the image to reduce redundant computation.
In step S703, the initial image is input into a deep learning neural network to identify lesions and obtain an identification result. In this embodiment, the deep learning neural network architecture may be as shown in Fig. 8 and specifically comprises: a convolution sub-network 801, a candidate-box sub-network 802, and a fully connected sub-network 803. Each sub-network roughly operates as follows:
The convolution sub-network 801 performs feature extraction on the initial image to obtain a convolution feature image. In this embodiment, the convolution sub-network 801 may comprise multiple segments of convolutional neural networks; each segment may be a residual convolutional neural network, to mitigate problems such as vanishing and exploding gradients, or a non-residual convolutional neural network; of course, the convolution sub-network 801 may also combine non-residual and residual convolutional neural networks.
The candidate-box sub-network 802 determines candidate regions on the convolution feature image and obtains fully connected feature maps accordingly. In this embodiment, the candidate-box sub-network 802 may use a sliding window of predetermined size and, centered on each sliding-window position, generate on the initial image a predetermined number of candidate boxes of predetermined sizes, the center of each candidate box corresponding to the center of the sliding window. A candidate region is obtained for each candidate box, and each candidate region correspondingly generates a candidate-region feature map. The candidate-region feature maps may further undergo region pooling to obtain the fully connected feature maps.
The fully connected sub-network 803 performs classification and related processing based on the fully connected feature maps to obtain the identification result, which indicates whether a lesion is present. In this embodiment, classification and regression may be performed respectively in two branches of the fully connected sub-network 803, which accordingly may contain a classification network layer and a regression network layer. The classification network layer can judge from the fully connected feature map whether a candidate region is foreground or background, i.e., whether a lesion is present in the candidate region; the regression network layer can correct the coordinates of the candidate box and finally determine the location of the lesion.
With this embodiment, a region-based convolutional neural network is used to identify lesions, which can improve identification accuracy, effectively reduce the probability of missed and erroneous diagnoses, and favor the broader application of AI diagnosis of medical images.
Embodiment 6:
On the basis of Embodiment 5, this embodiment further provides the following content:
In the convolution sub-network 801, several residual convolutional neural networks may be used to perform feature extraction on the initial image. A residual convolutional neural network may comprise multiple network layers as shown in Fig. 9: a convolutional network layer 901, an activation function network layer 902, and a batch normalization network layer 903. Each network layer roughly operates as follows:
The convolutional network layer 901 convolves the input image with preset convolution kernels.
The activation function network layer 902 performs activation using, for example, the Sigmoid function, the hyperbolic tangent (Tanh) function, or the Rectified Linear Unit (ReLU) function.
The batch normalization network layer 903 not only performs conventional standardization but also enables the network to converge faster, further mitigating the problems of vanishing and exploding gradients. In this embodiment, the processing of the batch normalization network layer 903 may specifically comprise the steps shown in Fig. 10:
In step S1001, the mean of the input batch data produced by the convolutional network layer 901 is computed.
In step S1002, the variance of the batch data is computed from the mean.
In step S1003, the batch data are standardized according to the mean and the variance to obtain batch standard data.
In step S1004, adjustment factors are applied to the batch standard data to obtain, for output, batch-adjusted data whose distribution is the same as or similar to that of the input batch data. In this embodiment, the adjustment factors have corresponding initial values at initialization; starting from these initial values, the adjustment factors can be trained during backpropagation together with the parameters processed by the network layers, so that the adjustment factors learn the distribution of the input batch data, and the input batch data still retain their original distribution after batch normalization.
Embodiment 7:
On the basis of the systems of the above embodiments, this embodiment further provides a breast-lesion identification method, specifically comprising the steps shown in Fig. 11:
In step S1101, the projection images are processed to obtain a reconstructed image.
In step S1102, lesions in the reconstructed image are identified using a deep learning method.
The content of each step may be similar to what is described at the corresponding places in the above embodiments and is not repeated here.
Embodiment 8:
In an embodiment of the present invention, a computer-readable storage medium is provided which stores a computer program; when executed by a processor, the computer program implements the steps of the above method embodiments, for example steps S1101 to S1102 shown in Fig. 11. Alternatively, when executed by a processor, the computer program implements the functions described in the above system embodiments, for example the functions of the above deep learning neural network.
The computer-readable storage medium of the embodiments of the present invention may include any entity or device capable of carrying computer program code, or a recording medium, for example memory such as ROM/RAM, magnetic disks, optical discs, or flash memory.
The deep learning neural network involved in the above embodiments is described in detail below through an application example.
This deep learning neural network can be used to identify lesions (calcifications) in the breast and may specifically adopt the architecture shown in Fig. 12:
The whole deep learning neural network comprises: the convolution sub-network 801, the candidate-box sub-network 802, and the fully connected sub-network 803.
The convolution sub-network 801 comprises: a first-segment convolutional neural network 1201, a pooling layer 1202, a second-segment convolutional neural network 1203, a third-segment convolutional neural network 1204, and a fourth-segment convolutional neural network 1205. The first-segment convolutional neural network 1201 is a non-residual convolutional neural network, while the second-segment convolutional neural network 1203, the third-segment convolutional neural network 1204, and the fourth-segment convolutional neural network 1205 are residual convolutional neural networks. A residual convolutional neural network comprises multiple network layers, still as shown in Fig. 9: the convolutional network layer 901, the activation function network layer 902, and the batch normalization network layer 903.
The candidate-box sub-network 802 comprises: a Region Proposal Network (RPN) 1206 and a region pooling network 1207.
The fully connected sub-network 803 comprises: a classification network layer 1208 and a regression network layer 1209.
A fifth-segment convolutional neural network 1211 is further included between the candidate-box sub-network 802 and the fully connected sub-network 803.
A mask network layer 1210 is further arranged after the fifth-segment convolutional neural network 1211.
The processing of the above deep learning neural network is roughly as follows:
1. The reconstructed image obtained from the projection images is cropped and otherwise preprocessed to yield an initial image of size 224 × 224. The reconstructed image here is usually a slice image.
2. The initial image is input into the first-segment convolutional neural network 1201 for initial feature extraction by convolution; the resulting feature map is processed by the pooling layer 1202 and then output to the second-segment convolutional neural network 1203, the third-segment convolutional neural network 1204, and the fourth-segment convolutional neural network 1205 for further feature extraction. The first-segment convolutional neural network 1201 performs its convolution with a 7 × 7 kernel and stride 2, which halves the data size, so the feature map output by the first-segment convolutional neural network 1201 is 112 × 112. After the feature map output by the first-segment convolutional neural network 1201 is processed by the pooling layer 1202, the feature map size is 56 × 56.
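The feature-map sizes quoted above (224 → 112 → 56) follow from the standard convolution/pooling output-size formula. A small sketch follows; the padding values (3 for the 7 × 7 convolution, 1 for the pooling, typical ResNet choices) are assumptions not stated in the text:

```python
def conv_out_size(size, kernel, stride, padding):
    """Standard output-size formula for a convolution or pooling layer:
    out = floor((size + 2*padding - kernel) / stride) + 1"""
    return (size + 2 * padding - kernel) // stride + 1

# first-segment conv: 7x7 kernel, stride 2 (padding 3 assumed)
s1 = conv_out_size(224, kernel=7, stride=2, padding=3)   # 224 -> 112
# pooling layer 1202: 3x3 window, stride 2 (padding 1 assumed)
s2 = conv_out_size(s1, kernel=3, stride=2, padding=1)    # 112 -> 56
```

With these assumed paddings the formula reproduces the 112 × 112 and 56 × 56 sizes stated in the text.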
The convolutional network layer 901 in the residual convolutional neural networks used here may compute according to the following formula (1):

S(i, j) = Σ_p Σ_n I(i + p, j + n) · K(p, n) ……formula (1)

where i, j are the pixel coordinates of the input image, I is the input image data, K is the convolution kernel, p and n are respectively the width and height of the convolution kernel, and S(i, j) is the output convolution data.
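Formula (1) can be sketched directly in a few lines of Python. Note that, as written, it is the cross-correlation form of convolution (the kernel is not flipped); the `conv2d` helper below is illustrative and not from the patent:

```python
def conv2d(I, K):
    """Valid cross-correlation of image I with kernel K:
    S(i, j) = sum_p sum_n I(i+p, j+n) * K(p, n), per formula (1)."""
    p, n = len(K), len(K[0])                      # kernel height, width
    rows, cols = len(I) - p + 1, len(I[0]) - n + 1
    S = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            S[i][j] = sum(I[i + a][j + b] * K[a][b]
                          for a in range(p) for b in range(n))
    return S
```

For example, a 2 × 2 all-ones kernel over a 3 × 3 all-ones image yields a 2 × 2 output of fours, each output being the sum over a 2 × 2 window.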
The batch normalization network layer 903 may compute as follows:
First, the mean μ_β of the input batch data is computed using formula (2) below. The input batch data β = x_{1...m} are the output data of the convolutional network layer 901.

μ_β = (1/m) Σ_{i=1}^{m} x_i ……formula (2)

where m is the total number of data.
Next, the variance σ_β² of the batch data is computed from the mean using formula (3) below:

σ_β² = (1/m) Σ_{i=1}^{m} (x_i − μ_β)² ……formula (3)

Then, using formula (4) below, the batch data are standardized according to the mean and the variance to obtain the batch standard data x̂_i:

x̂_i = (x_i − μ_β) / √(σ_β² + ε) ……formula (4)

where ε is a tiny positive number that avoids division by zero.
Next, using formula (5) below, the adjustment factors α and ω are applied to the batch standard data to obtain, for output, batch-adjusted data whose distribution is the same as or similar to that of the input batch data; the output can serve as the input of the next activation function network layer 902.

y_i = α · x̂_i + ω ……formula (5)

where α is a scaling factor and ω is a shift factor. The adjustment factors α and ω have corresponding initial values at initialization; in this application example, the initial value of α is approximately 1 and the initial value of ω is approximately 0. Starting from these initial values, α and ω can be trained during backpropagation together with the parameters processed by the network layers, so that α and ω learn the distribution of the input batch data, and the input batch data still retain their original distribution after batch normalization.
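Formulas (2)–(5) together form one batch-normalization forward pass, which can be sketched as below. The `batch_norm` helper is hypothetical and operates on a flat list for clarity; real implementations normalize per channel over a 4-D tensor:

```python
def batch_norm(batch, alpha=1.0, omega=0.0, eps=1e-5):
    """One batch-normalization forward pass per formulas (2)-(5)."""
    m = len(batch)
    mu = sum(batch) / m                                   # formula (2): mean
    var = sum((x - mu) ** 2 for x in batch) / m           # formula (3): variance
    x_hat = [(x - mu) / (var + eps) ** 0.5 for x in batch]  # formula (4)
    return [alpha * x + omega for x in x_hat]             # formula (5)
```

With the initial values α ≈ 1 and ω ≈ 0 described in the text, the output has approximately zero mean and unit variance; training α and ω then lets the layer recover the input distribution where useful.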
The activation function network layer 902 may perform the computation shown in formula (6) below (the ReLU activation):

f(x) = max(0, x) ……formula (6)

where x is the output data of the batch normalization network layer 903 and f(x) is the output of the activation function network layer 902.
The three operations of the convolutional network layer 901, the activation function network layer 902, and the batch normalization network layer 903 above can be combined into one neural network block. The second-segment convolutional neural network 1203 has 3 neural network blocks: one block uses 64 convolution kernels of size 1 × 1; another uses 64 kernels of size 3 × 3; a third uses 256 kernels of size 1 × 1. The third-segment convolutional neural network 1204 has 4 neural network blocks: one block uses 128 convolution kernels of size 1 × 1; another uses 128 kernels of size 3 × 3; a third uses 512 kernels of size 1 × 1. The fourth-segment convolutional neural network 1205 has 23 neural network blocks: one block uses 256 convolution kernels of size 1 × 1; another uses 256 kernels of size 3 × 3; a third uses 1024 kernels of size 1 × 1. Finally, through the first- to fourth-segment convolutional neural networks, the output convolution feature image is 14 × 14 × 1024, meaning the output convolution feature image size is 14 × 14 and the number of kernels (channels) is 1024.
3. The convolution feature image produced by the convolution sub-network 801 is then input into the RPN 1206 and the region pooling network 1207 for corresponding processing.
The RPN 1206 is used to extract candidate regions. Specifically, a sliding window of predetermined size 3 × 3 is used; centered on each sliding-window position, a predetermined number (9) of candidate boxes of predetermined sizes is generated on the initial image, the center of each candidate box corresponding to the center of the sliding window. A candidate region is thus obtained for each candidate box, and each candidate region correspondingly generates a candidate-region feature map. Since the convolution feature image output by the first- to fourth-segment convolutional neural networks is 14 × 14 × 1024, the predetermined sliding-window size is 3 × 3, and the predetermined number of candidate boxes is 9, 256 candidate regions and correspondingly 256 candidate-region feature maps can be obtained, i.e., 256-dimensional fully connected features. Some of the candidate boxes share the same area, which differs from the area of the other candidate boxes; the areas and aspect ratios of the candidate boxes are obtained according to the configured settings.
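The generation of the 9 candidate boxes per sliding-window center can be sketched as below. The patent fixes only the count (9) and the 3 × 3 window; the specific scales, aspect ratios, and feature stride used here are assumed illustrative values (3 scales × 3 ratios = 9 boxes), in the spirit of an RPN anchor generator:

```python
def make_anchors(center, scales=(8, 16, 32), ratios=(0.5, 1.0, 2.0), stride=16):
    """Generate 9 candidate boxes (x0, y0, x1, y1) around one
    sliding-window center; scales/ratios/stride are assumed values."""
    cx, cy = center
    boxes = []
    for s in scales:
        for r in ratios:
            # width/height chosen so that w/h == r and w*h == (s*stride)**2
            w = s * stride * (r ** 0.5)
            h = s * stride / (r ** 0.5)
            boxes.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return boxes
```

Every box is centered on the window center, as the text requires; boxes sharing a scale have the same area but different aspect ratios, matching the remark that some candidate boxes share an area that differs from the others'.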
The region pooling network 1207 is used to pool each candidate-region feature map into a fixed-size pooled feature map according to the position coordinates of the candidate box. The region pooling network 1207 may be a RoiAlign network. The candidate boxes are produced by a regression model and generally have floating-point coordinates; the RoiAlign network does not quantize these floating-point numbers. For each candidate box, the candidate-region feature map is divided into 7 × 7 units; four coordinate positions are fixed in each unit, the values at the four positions are computed by bilinear interpolation, and a max-pooling operation is then performed. For each candidate box, a 7 × 7 × 1024 pooled feature map is obtained, and all pooled feature maps together constitute the initial fully connected feature map.
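The key difference from quantizing RoI pooling is the bilinear interpolation at floating-point sampling positions. A minimal sketch of that sampling step follows; the `bilinear` helper is hypothetical, and a full RoiAlign additionally max-pools (or averages) several such samples per output unit:

```python
import math

def bilinear(feat, y, x):
    """Bilinearly interpolate 2-D feature map `feat` at float coords
    (y, x), as RoiAlign does instead of rounding to integer positions."""
    y0, x0 = int(math.floor(y)), int(math.floor(x))
    y1 = min(y0 + 1, len(feat) - 1)
    x1 = min(x0 + 1, len(feat[0]) - 1)
    dy, dx = y - y0, x - x0
    return (feat[y0][x0] * (1 - dy) * (1 - dx) +
            feat[y0][x1] * (1 - dy) * dx +
            feat[y1][x0] * dy * (1 - dx) +
            feat[y1][x1] * dy * dx)
```

Sampling at (0.5, 0.5) on a 2 × 2 map returns the average of its four values, which is exactly the information a quantizing pooler would have discarded.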
4. After being processed by the fifth-segment convolutional neural network 1211, the initial fully connected feature map yields the corresponding final 7 × 7 × 2048 fully connected feature map. The fifth-segment convolutional neural network 1211 has 3 neural network blocks: one block uses 512 convolution kernels of size 1 × 1; another uses 512 kernels of size 3 × 3; a third uses 2048 kernels of size 1 × 1.
The final fully connected feature map produced by the fifth-segment convolutional neural network 1211 enters the three branches of the fully connected sub-network 803: the classification network layer 1208, the regression network layer 1209, and the mask network layer 1210. The classification network layer 1208 takes as input the final fully connected feature map produced by the fifth-segment convolutional neural network 1211 and uses it to judge whether each candidate region is foreground or background; its output is a 14 × 14 × 18 array, where "18" means each of the 9 candidate boxes outputs both a foreground and a background result. The regression network layer 1209 predicts the coordinates, height, and width of the center anchor point of each candidate box and corrects the box's coordinates; its output is 14 × 14 × 36, where "36" represents the four endpoint values of the 9 candidate boxes. The mask network layer 1210 uses a 2 × 2 convolution kernel to upsample the feature map of each candidate region that has been determined to be a calcification and has undergone position correction, obtaining a 14 × 14 × 256 feature map; subsequent convolution of this feature map yields a 14 × 14 × 2 feature map, which is then mask-processed to segment the foreground from the background. In this application example, the number of categories is 2, indicating the presence or absence of breast calcifications; in addition, the location of the calcifications can be further obtained.
The classification-network-layer loss function used in the fully connected sub-network 803 to optimize classification is computed as in formula (7) below; the regression-network-layer loss function, used to optimize regression when the classification result indicates the presence of a calcification, is computed as in formula (8) below.

L_cls = −log q ……formula (7)

where q is the probability of the true class.

L_reg = smooth_L1(b), with smooth_L1(b) = 0.5·b² if |b| < 1, and |b| − 0.5 otherwise ……formula (8)

where b takes the value (ti − ti′), ti being the predicted coordinate and ti′ the ground-truth coordinate.
The optimization of the mask processing may involve: during classification, computing the cross entropy after applying the Sigmoid activation function.
The above are merely preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent substitution, or improvement made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.

Claims (10)

  1. A computed tomography (CT) front-end device, the device comprising an emission source, characterized in that the emission source comprises: several emission units arranged in a predetermined pattern and controlled to perform their respective scanning actions in sequence, the exit direction of the emission units passing through a scan center.
  2. The device according to claim 1, characterized in that the emission units are arranged in an arc, and the center of the circle corresponding to the arc corresponds to the scan center.
  3. The device according to claim 2, characterized in that the radius corresponding to the arc is 10–150 cm, and/or the central angle corresponding to the arc segment between two adjacent emission units is 5–50 degrees.
  4. The device according to claim 3, characterized in that the emission source comprises 15 of the emission units, the radius corresponding to the arc is 65 cm, and the central angle corresponding to the arc segment between two adjacent emission units is 5 degrees.
  5. The device according to claim 1, characterized in that the emission units use carbon nanotube cathodes,
    the emission source further comprising:
    a base; and,
    a conductive strip arranged on the base for carrying the emission units and providing their electrical connection, the conductive strip and the emission units being assembled together by screws.
  6. A CT system, characterized in that the system comprises: the CT front-end device according to any one of claims 1 to 5 and a workstation,
    the CT front-end device further comprising:
    a detector for obtaining corresponding projection images when the emission units perform scanning actions,
    the workstation comprising a memory and a processor, the processor implementing the following steps when executing the computer program stored in the memory:
    processing the projection images to obtain a reconstructed image;
    identifying lesions in the reconstructed image using a deep learning method.
  7. The system according to claim 6, characterized in that identifying lesions in the reconstructed image using a deep learning method specifically comprises the following steps:
    preprocessing the reconstructed image to obtain an initial image;
    inputting the initial image into a deep learning neural network to identify the lesions and obtain an identification result,
    wherein inputting the initial image into the deep learning neural network to identify the lesions specifically comprises the following steps:
    performing feature extraction on the initial image to obtain a convolution feature image;
    determining candidate regions on the convolution feature image to obtain fully connected feature maps accordingly;
    classifying based on the fully connected feature maps to obtain the identification result.
  8. The system according to claim 7, characterized in that performing feature extraction on the initial image to obtain a convolution feature image is specifically:
    using several residual convolutional neural networks to perform feature extraction on the initial image,
    wherein the residual convolutional neural network comprises a convolutional network layer, an activation function network layer, and a batch normalization network layer,
    and using several residual convolutional neural networks to perform feature extraction on the initial image specifically comprises the following steps:
    computing the mean of the input batch data through the batch normalization network layer;
    computing the variance of the batch data from the mean;
    standardizing the batch data according to the mean and the variance to obtain batch standard data;
    processing the batch standard data with adjustment factors to obtain, for output, batch-adjusted data whose distribution is the same as or similar to that of the input batch data.
  9. A method for identifying breast lesions, characterized in that the method is based on the system according to claim 6, the projection images are breast projection images, and the identification result indicates whether a lesion is present in the breast, the method comprising the following steps:
    processing the projection images to obtain a reconstructed image;
    identifying lesions in the reconstructed image using a deep learning method.
  10. A computer-readable storage medium storing a computer program, characterized in that, when executed by a processor, the computer program implements the steps of the method according to claim 9.
PCT/CN2019/071198 2018-10-29 2019-01-10 Computed tomography front-end device, system, method and storage medium WO2020087780A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811268175.5A CN109589127B (zh) 2018-10-29 2018-10-29 Computed tomography front-end device, system, method and storage medium
CN201811268175.5 2018-10-29

Publications (1)

Publication Number Publication Date
WO2020087780A1 true WO2020087780A1 (zh) 2020-05-07

Family

ID=65958590

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/071198 WO2020087780A1 (zh) 2018-10-29 2019-01-10 Computed tomography front-end device, system, method and storage medium

Country Status (2)

Country Link
CN (1) CN109589127B (zh)
WO (1) WO2020087780A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109350097B (zh) * 2018-12-17 2021-11-05 Shenzhen Institutes of Advanced Technology X-ray source array, X-ray tomography system and method
CN113520416A (zh) * 2020-04-21 2021-10-22 Shanghai United Imaging Healthcare Co., Ltd. Method and system for generating a two-dimensional image of an object
CN112107324B (zh) * 2020-09-03 2024-04-26 Shanghai United Imaging Healthcare Co., Ltd. Scanning method for digital breast tomosynthesis equipment, medium, and medical device

Citations (4)

Publication number Priority date Publication date Assignee Title
CN102106740A (zh) * 2011-03-11 2011-06-29 Hohai University X-ray compound tomography imaging system and method
CN102697518A (zh) * 2012-06-25 2012-10-03 Suzhou Institute of Biomedical Engineering and Technology Static energy-resolved CT scanner and scanning method thereof
CN106388848A (zh) * 2016-10-18 2017-02-15 Shenzhen Institutes of Advanced Technology Method and system for preprocessing CT images, and static CT imaging apparatus
CN108257134A (zh) * 2017-12-21 2018-07-06 Shenzhen University Deep-learning-based method and system for automatic segmentation of nasopharyngeal carcinoma lesions

Family Cites Families (10)

Publication number Priority date Publication date Assignee Title
CN103948395A (zh) * 2007-07-19 2014-07-30 The University of North Carolina at Chapel Hill Stationary X-ray digital tomosynthesis or tomography system and related method
US8045811B2 (en) * 2008-11-26 2011-10-25 Samplify Systems, Inc. Compression and storage of projection data in a computed tomography system
CN102551783A (zh) * 2012-02-16 2012-07-11 Deng Min Dual-modality real-time imaging device and system for surgery, and method thereof
CN103901057B (zh) * 2012-12-31 2019-04-30 Nuctech Co., Ltd. Article inspection apparatus using a distributed X-ray source
CN104465279B (zh) * 2013-09-18 2017-08-25 Tsinghua University X-ray device and CT equipment having the same
US10039505B2 (en) * 2014-07-22 2018-08-07 Samsung Electronics Co., Ltd. Anatomical imaging system having fixed gantry and rotating disc, with adjustable angle of tilt and increased structural integrity, and with improved power transmission and position sensing
CN105445290A (zh) * 2014-09-02 2016-03-30 Nuctech Co., Ltd. Online X-ray product quality inspection device
CN105997127A (zh) * 2016-06-21 2016-10-12 Shenzhen Institutes of Advanced Technology Static dual-energy breast CT imaging system and imaging method
CN106326931A (zh) * 2016-08-25 2017-01-11 Nanjing University of Information Science and Technology Deep-learning-based automatic classification method for mammography images
CN107545245A (zh) * 2017-08-14 2018-01-05 Institute of Semiconductors, Chinese Academy of Sciences Age estimation method and device


Also Published As

Publication number Publication date
CN109589127A (zh) 2019-04-09
CN109589127B (zh) 2021-02-26


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19878628

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19878628

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 02/11/2021)