WO2023221954A1 - Pancreatic tumor image segmentation method and system based on reinforcement learning and attention


Info

Publication number
WO2023221954A1
Authority
WO
WIPO (PCT)
Prior art keywords
segmentation
layer
image
reinforcement learning
pancreatic
Prior art date
Application number
PCT/CN2023/094394
Other languages
French (fr)
Chinese (zh)
Inventor
李劲松
田雨
周天舒
董凯奇
Original Assignee
浙江大学
Priority date
Filing date
Publication date
Application filed by 浙江大学 (Zhejiang University)
Publication of WO2023221954A1 publication Critical patent/WO2023221954A1/en

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10081 Computed x-ray tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20092 Interactive image processing based on input by user
    • G06T2207/20104 Interactive definition of region of interest [ROI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30096 Tumor; Lesion

Definitions

  • the present invention relates to the field of image segmentation, and in particular to a pancreatic tumor image segmentation method and system based on reinforcement learning and attention.
  • Computed Tomography has been widely used in cancer research, prevention, diagnosis and treatment, and is currently the main imaging diagnostic basis for the diagnosis and treatment of pancreatic cancer.
  • Fully automatic segmentation technology for pancreatic tumors can realize large-scale clinical CT image processing, improve patient diagnosis and treatment, and accelerate related clinical research, which is of great significance to families, society, and the national economy.
  • pancreatic tumor segmentation faces huge challenges.
  • in CT images, pancreatic tumors differ little from the pancreas and the other abdominal organs around it, so clear boundaries are hard to delineate.
  • the shape, size, and location of pancreatic tumors are not fixed and are highly complex.
  • the pancreas is a small abdominal organ, and pancreatic tumors are even smaller.
  • Traditional methods and general neural network methods cannot accurately locate the target area.
  • the existing pancreatic tumor segmentation still mainly relies on manual annotation by doctors.
  • the annotation process is tedious and inefficient; more importantly, pancreatic annotation often requires rich clinical experience, so the annotation work is a significant challenge even for doctors.
  • convolutional neural networks are also widely used in medical image segmentation.
  • the current mainstream segmentation method for three-dimensional images takes one or more CT slices as input and uses a complex convolutional neural network to predict the pancreatic region, improving accuracy by learning from prediction errors. Although certain results have been achieved, the network segments the two-dimensional slices independently, ignoring the intrinsic connections between them, so segmentation accuracy remains insufficient.
  • a three-dimensional neural network treats all slices as equally important, which introduces a large amount of invalid and interfering information during segmentation.
  • due to the small receptive field of convolutional kernels, information between non-adjacent slices is difficult to utilize effectively.
  • Existing medical image segmentation methods use a cascade method for segmentation.
  • first, one network performs coarse segmentation to obtain the region of interest (ROI) of the target, and then a fine-segmentation network segments within it.
  • the fine segmentation network often takes the probability map generated by the coarse segmentation network as input, and the fine segmentation network is only responsible for optimizing the results of the coarse segmentation.
  • such a method prevents the fine-segmentation network from using information outside the ROI, amplifies the prediction errors of the coarse-segmentation network, and introduces many false negatives. For small targets such as pancreatic tumors, the false-negative problem caused by the cascade method is even more pronounced.
  • the purpose of the present invention is to propose a pancreatic tumor image segmentation method and system based on reinforcement learning and attention in view of the shortcomings of the existing technology.
  • existing two-dimensional convolutional neural networks for pancreatic tumor CT cannot utilize inter-layer information, while three-dimensional convolutional neural networks learn erroneous inter-layer position and shape information.
  • when labeling, clinicians often judge the approximate shape and position of the pancreas and tumor from a few key slices and rely on those key slices to segment the remaining layers; this approach is efficient and accurate.
  • the present invention proposes using reinforcement learning to simulate the behavior of clinicians when labeling tumors, concentrating the attention over a CT image sequence on a few key CT layers.
  • an inter-layer attention mechanism allows information to flow between layers, achieving accurate segmentation of pancreatic tumors.
  • the present invention provides a pancreatic tumor image segmentation method based on reinforcement learning and attention, which method includes the following steps:
  • each reference layer corresponds to a cross-attention feature fusion module that interacts with the segmentation layer.
  • the cross-attention feature fusion module unifies the feature dimensions of the reference layer and the segmentation layer and then concatenates them for a first fusion.
  • the first fusion result is dot-multiplied with the dimension-unified segmentation-layer features to generate the information correlation matrix of the cross-attention mechanism, which is dot-multiplied with the dimension-unified segmentation-layer features again for a second fusion; a residual operation then fuses the second fusion result with the original segmentation-layer features to produce the segmentation result;
  • the preprocessing is specifically: resample the voxel spacing of all training-set data to 1 mm; truncate the HU values of the images to [-100, 240], then normalize to [0, 1].
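The truncation and normalization step can be sketched as follows. The function name is illustrative, and the 1 mm voxel-spacing resampling is omitted since it needs an image library (e.g. SimpleITK) and interpolation choices the text does not specify:

```python
import numpy as np

def preprocess_ct(volume_hu: np.ndarray) -> np.ndarray:
    """Truncate HU values to [-100, 240] and rescale linearly to [0, 1].

    volume_hu is a CT volume in Hounsfield units. Voxel-spacing
    resampling is intentionally left out of this dependency-free sketch.
    """
    clipped = np.clip(volume_hu, -100.0, 240.0)
    return (clipped + 100.0) / 340.0

# Air (-1000 HU) clips to 0, dense bone (+1000 HU) clips to 1,
# and 70 HU soft tissue lands at exactly 0.5.
vol = np.array([-1000.0, -100.0, 70.0, 240.0, 1000.0])
normalized = preprocess_ct(vol)
```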
  • the three-dimensional coarse segmentation model consists of two parts: encoding and decoding.
  • the encoding part includes four encoding blocks, each followed by a downsampling layer; the decoding part includes four decoding blocks, each preceded by an upsampling layer; each encoding and decoding block consists of a varying number of convolution-activation layers.
  • the ROI region image corresponds to the pancreatic CT image of the nth pancreatic cancer patient in the training set and is divided into 2D images along the z-axis, so that each element represents the 2D image of the kth layer after slicing.
  • the label of the truncated CT image, i.e. the label corresponding to the pancreatic CT image of the nth pancreatic cancer patient in the training set, is likewise divided into 2D images along the z-axis, so that each element represents the 2D label of the kth layer, where K_ROI is the minimum layer number after truncation and K_ROI' is the maximum layer number after truncation.
  • the loss function used in the three-dimensional coarse segmentation model is the cross-entropy loss function Loss CE :
  • Y n is the pancreatic tumor segmentation label of the CT image
  • m is the number of pixels in the input image
  • y_j and z_j are the true label and predicted label of pixel j, respectively
  • 0, 1, and 2 represent the background, pancreas, and pancreatic tumor, respectively
  • the function I(·) is an indicator function
  • function log is a logarithmic function
  • p( ⁇ ) is the probability function predicted by the model.
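As a concrete reading of the loss terms above, here is a minimal sketch of the three-class pixel-wise cross-entropy (0 = background, 1 = pancreas, 2 = tumor); the array shapes and the numerical-stability epsilon are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def cross_entropy_loss(probs: np.ndarray, labels: np.ndarray) -> float:
    """Pixel-wise cross-entropy Loss_CE over m pixels and 3 classes.

    probs:  (m, 3) predicted probabilities p(.), rows summing to 1
    labels: (m,)   true class ids y_j in {0, 1, 2}
    The indicator I(y_j = c) simply picks out the predicted probability
    of the true class for each pixel j.
    """
    m = labels.shape[0]
    picked = probs[np.arange(m), labels]             # p(z_j = y_j)
    return float(-np.mean(np.log(picked + 1e-12)))   # epsilon avoids log(0)

probs = np.array([[0.8, 0.1, 0.1],    # pixel 0: background likely
                  [0.2, 0.7, 0.1],    # pixel 1: pancreas likely
                  [0.1, 0.2, 0.7]])   # pixel 2: tumor likely
labels = np.array([0, 1, 2])
loss = cross_entropy_loss(probs, labels)
```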
  • the environment of the reinforcement learning network is the ROI area obtained from the original CT image
  • the state is the two-layer slice randomly selected along the z-axis
  • the action is that, at each iteration, each agent moves its last selected reference layer forward or backward along the z-axis; each reference layer corresponds to one agent.
  • the action-value function is the loss between the prediction of the two-dimensional fine-segmentation model and the true label, and a heuristic function computes the maximum reward value of the next action in the current state; during iteration, the reinforcement learning network is trained by negative feedback.
  • after training, the parameters of the reinforcement learning network are fixed; the network selects the reference layers, and the reference layers and segmentation layer are input into the two-dimensional fine-segmentation model to complete its training.
  • the implementation of the cross-attention feature fusion module is as follows:
  • the cross-attention feature fusion module first uses two linear mapping functions g(·) and f(·) to flatten the input features from three dimensions to one and transforms the one-dimensional features so that the dimensions of the related features are consistent;
  • Cat( ⁇ ) is the splicing operation along the channel direction
  • f′( ⁇ ) is a linear mapping function.
  • in step (3), the two-dimensional fine-segmentation model takes the segmentation layer and the reference layers as input, outputs the prediction for the segmentation layer, and uses the Dice loss for negative-feedback learning:
  • m' is the number of pixels in the input 2D image
  • y k' represents the label of the 2D image of the k'th layer after segmentation
  • y_h and z_h are the true label and predicted label of pixel h, respectively.
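The Dice loss above can be sketched as a soft per-pixel overlap score; the smoothing epsilon is an assumption added only to keep the empty-mask case well defined:

```python
import numpy as np

def dice_loss(pred: np.ndarray, label: np.ndarray, eps: float = 1e-6) -> float:
    """Soft Dice loss: 1 - 2|Y∩Z| / (|Y| + |Z|), computed over the m'
    pixels of a 2D slice. pred holds per-pixel tumor probabilities z_h,
    label the 0/1 ground truth y_h."""
    inter = np.sum(pred * label)
    return float(1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(label) + eps))

mask = np.array([0.0, 1.0, 1.0, 0.0])
```

A perfectly matching prediction drives the loss to 0, while a fully disjoint one drives it toward 1, which is what makes the loss usable as a reward signal for the reference-layer selection later on.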
  • the present invention also provides a pancreatic tumor image segmentation system based on reinforcement learning and attention.
  • the system includes a pancreatic tumor segmentation training-set construction module, a three-dimensional coarse-segmentation model module, a reinforcement learning network module, and a two-dimensional fine-segmentation model module;
  • the pancreatic tumor segmentation training set building module is used to collect and preprocess pancreatic CT images of patients with pancreatic cancer, outline pancreatic tumor segmentation labels on CT images, and construct a pancreatic tumor segmentation training set;
  • the three-dimensional coarse-segmentation model module is used to obtain the pancreatic ROI (region of interest) and to slice the ROI image and its label into 2D images along the z-axis;
  • the reinforcement learning network module is used to select two reference layers from the 2D image segmented by the three-dimensional coarse segmentation model module;
  • the two-dimensional fine segmentation model module is used to divide the data and labels of the training set into 2D images along the z-axis, and select a segmentation layer.
  • the two-dimensional fine-segmentation model module includes two cross-attention feature fusion sub-modules corresponding to the two reference layers; each sub-module interacts with the segmentation layer, unifying the feature dimensions of the reference layer and the segmentation layer and then concatenating them for a first fusion.
  • the first fusion result is dot-multiplied with the dimension-unified segmentation-layer features to generate the information correlation matrix of the cross-attention mechanism, which is dot-multiplied with the dimension-unified segmentation-layer features again for a second fusion; a residual operation then fuses the second fusion result with the original segmentation-layer features to obtain the tumor segmentation result.
  • Figure 1 is a flow chart of a pancreatic tumor image segmentation method based on reinforcement learning and attention provided by the present invention.
  • Figure 2 is a schematic diagram of the cross-attention feature fusion module of the present invention.
  • Figure 3 is a schematic structural diagram of the coarse segmentation model 3D UNet of the present invention.
  • Figure 4 is a schematic structural diagram of the precise segmentation model 2D UNet of the present invention.
  • Figure 5 is a flow chart of reinforcement learning training of the present invention.
  • Figure 6 is a schematic diagram of a pancreatic tumor image segmentation system based on reinforcement learning and attention provided by the present invention.
  • the present invention provides a pancreatic tumor segmentation method based on reinforcement learning and attention.
  • the implementation steps are as follows:
  • the dataset satisfies N = N_train + N_val, where N_train is the number of training samples and N_val the number of test samples; X_n is the pancreatic CT image of the nth pancreatic cancer patient in T_train and Y_n the pancreatic tumor segmentation label of that CT image; X_n' is the pancreatic CT image of the n'th pancreatic cancer patient in T_val and Y_n' the corresponding label.
  • the HU value of the image is truncated between [-100, 240], and then normalized to between [0, 1].
  • the HU value is the CT number, a unit measuring the density of local tissue or organs in the human body, usually called the Hounsfield unit (HU); air is -1000 HU and dense bone is +1000 HU.
  • 3D UNet network for coarse segmentation of pancreatic CT is constructed, denoted as 3D coarse segmentation model M c .
  • This model consists of two parts: encoding and decoding.
  • the encoding part includes four encoding blocks, each followed by a downsampling layer.
  • the decoding part consists of four decoding blocks, each of which is preceded by an upsampling layer.
  • each encoding block and decoding block consists of a varying number of convolution-activation layers.
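To make the encoder/decoder layout concrete, the following sketch tracks only the spatial sizes through four downsampling and four upsampling layers. The input size and the stride-2 sampling factor are illustrative assumptions; channel counts and per-block convolution counts (which the text says vary) are omitted:

```python
def unet_spatial_sizes(input_size, n_levels=4):
    """Spatial-size bookkeeping for the coarse 3D U-Net described above:
    four encoding blocks, each followed by a downsampling layer that
    halves every axis, mirrored by four decoding blocks, each preceded
    by an upsampling layer that doubles every axis."""
    enc = [tuple(input_size)]
    size = tuple(input_size)
    for _ in range(n_levels):
        size = tuple(s // 2 for s in size)   # downsampling halves each axis
        enc.append(size)
    dec = []
    for _ in range(n_levels):
        size = tuple(s * 2 for s in size)    # upsampling doubles each axis
        dec.append(size)
    return enc, dec

enc, dec = unet_spatial_sizes((128, 128, 64))
```

This also makes visible why skip connections need matching resolutions: the ith decoder output has the same spatial size as the (n_levels - i)th encoder output.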
  • the network is trained using training set samples, and the loss function used is the cross-entropy loss function Loss CE :
  • 0, 1, 2 respectively represent the background, pancreas or Pancreatic tumor
  • function I( ⁇ ) is an indicator function
  • function log is a logarithmic function
  • p( ⁇ ) is a probability function predicted by the network.
  • the training-set data X_n and Y_n are sliced in the same way as the 3D ROI region, so that X_n = {x_k', k' ∈ [1, K]}, where x_k' is the 2D image of the k'th layer after slicing, and Y_n = {y_k', k' ∈ [1, K]}, where y_k' is the label of the 2D image of the k'th layer.
  • (3.2) implementation of the cross-attention feature fusion module.
  • the present invention designs two inter-layer information interaction modules based on the cross-attention mechanism so that inter-layer information can interact in the reference layer and segmentation layer.
  • the two cross-attention feature fusion modules are identical in structure.
  • the cross-attention feature fusion module first uses two linear mapping functions g(·) and f(·) to flatten the input features from three dimensions to one and transforms the one-dimensional features so that the dimensions of the related features are consistent.
  • Cat( ⁇ ) is the splicing operation along the channel direction.
  • W q , W k , and W v are three convolutions used to give each feature adaptive weights.
  • sig( ⁇ ) is the sigmoid function.
  • f′( ⁇ ) is a linear mapping function.
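Putting the operations of this module together, a dependency-free sketch follows. The matrix shapes, random initialization, and the exact placement of the projections standing in for g(·), f(·), f'(·) and the W_q/W_k/W_v convolutions are illustrative assumptions; the patent's module operates on convolutional feature maps rather than flat matrices:

```python
import numpy as np

def cross_attention_fuse(seg_feat, ref_feat, rng):
    """Sketch of cross-attention feature fusion between the segmentation
    layer and one reference layer: unify dimensions, concatenate (first
    fusion), build a sigmoid-gated correlation matrix against the
    segmentation features, dot-multiply again (second fusion), and add
    a residual connection to the original segmentation features."""
    n, c = seg_feat.shape
    g = rng.standard_normal((c, c)) * 0.1    # stands in for g(.) on the reference layer
    f = rng.standard_normal((c, c)) * 0.1    # stands in for f(.) on the segmentation layer
    ref_u = ref_feat @ g
    seg_u = seg_feat @ f
    # first fusion: concatenate along channels, project back with f'(.)
    f_prime = rng.standard_normal((2 * c, c)) * 0.1
    fused1 = np.concatenate([ref_u, seg_u], axis=1) @ f_prime
    # information correlation matrix: sigmoid-gated dot product (n x n)
    corr = 1.0 / (1.0 + np.exp(-(fused1 @ seg_u.T)))
    fused2 = corr @ seg_u                    # second fusion
    return seg_feat + fused2                 # residual with original features

rng = np.random.default_rng(0)
seg = rng.standard_normal((16, 32))   # 16 spatial positions, 32 channels
ref = rng.standard_normal((16, 32))
out = cross_attention_fuse(seg, ref, rng)
```

The residual term is what keeps the fine-segmentation branch from discarding the segmentation layer's own evidence when the reference layer happens to be uninformative.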
  • m' is the number of pixels in the input 2D image
  • y h and z h are the real label and predicted label of pixel h respectively.
  • the reinforcement learning network consists of a 3D ResNet network whose output is a vector that maps to the agent's action space.
  • the entire reinforcement learning framework can be divided into the following parts: environment, agents, states, actions, reward, and loss function. The invention explains the meaning of each part and the reinforcement learning process:
  • Agents: to select the reference layers layer_a and layer_b, the invention sets up two agents, Agent_1 and Agent_2.
  • State: s(t) is the state at iteration t; the initial state of the two reference layers layer_a and layer_b selected by the reinforcement learning network is two slices randomly selected along the z-axis.
  • Action: the action strategy function of Agent_1 and Agent_2 is π(act(t) | s(t)); the present invention adopts a greedy strategy and traverses all actions in the action space, where s(t) and act(t) are the current state and action of the agent, respectively. Each action moves the reference layer last chosen by Agent_1 or Agent_2 forward or backward along the z-axis.
  • the final Stop action terminates the selection, indicating that Agent_1 and Agent_2 can find no reference layer that further improves the result.
  • Action-value function: the present invention uses the Dice loss between the true label Y_n and the set of predictions of the two-dimensional fine-segmentation model M_F over all 2D layers of a CT image X_n, expressed as:
  • Heuristic function The heuristic function is used to calculate the maximum reward value of the next action in the current state:
  • γ ∈ [0, 1] is the decay coefficient: the more actions taken, the smaller the benefit.
  • Loss function: during iteration, the negative-feedback method trains the reinforcement learning network so that Agent_1 and Agent_2 quickly and accurately find the most appropriate reference layers.
  • the loss function of the t-th iteration can be expressed as:
  • the reinforcement learning network lets agents Agent_1 and Agent_2 select two reference layers, layer_a and layer_b, from the environment, denoted as state s(t).
  • the parameters of the reinforcement learning network are fixed.
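The selection loop described above (two agents, moves of one slice along z, a greedy traversal of all actions, and a Stop when no action improves the reward) can be sketched as follows. The scoring callable stands in for the negated Dice loss of the fine-segmentation model M_F and is a hypothetical placeholder; the real method trains a 3D ResNet Q-network rather than re-scoring every candidate move:

```python
def greedy_reference_search(score_fn, n_layers, a0, b0, max_iters=50):
    """Greedy reference-layer selection sketch: at each iteration every
    action (move layer_a or layer_b one slice forward/backward) is
    traversed and the best-scoring move is taken; Stop ends the search
    when no move improves the score. score_fn(a, b) is a hypothetical
    stand-in for the reward derived from the fine model's Dice loss."""
    a, b = a0, b0
    best = score_fn(a, b)
    for _ in range(max_iters):
        candidates = []
        for da, db in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
            na, nb = a + da, b + db
            if 0 <= na < n_layers and 0 <= nb < n_layers and na != nb:
                candidates.append((score_fn(na, nb), na, nb))
        if not candidates:
            break
        top = max(candidates)
        if top[0] <= best:       # Stop: no action improves the reward
            break
        best, a, b = top
    return a, b

# Toy reward peaked at layers (7, 13): the search should converge there.
peak = lambda a, b: -((a - 7) ** 2 + (b - 13) ** 2)
```

With `peak` as the reward, starting from layers (2, 18) the loop climbs one slice at a time and stops at (7, 13), mirroring how the agents home in on the key slices a clinician would pick.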
  • the present invention also provides a pancreatic tumor image segmentation system based on reinforcement learning and attention.
  • the system includes a pancreatic tumor segmentation training-set construction module, a three-dimensional coarse-segmentation model module, a reinforcement learning network module, and a two-dimensional fine-segmentation model module;
  • the pancreatic tumor segmentation training set building module is used to collect and preprocess pancreatic CT images of patients with pancreatic cancer, outline pancreatic tumor segmentation labels on CT images, and construct a pancreatic tumor segmentation training set;
  • the three-dimensional coarse-segmentation model module is used to obtain the pancreatic ROI (region of interest) and to slice the ROI image and its label into 2D images along the z-axis;
  • the reinforcement learning network module is used to select two reference layers from the 2D image segmented by the three-dimensional coarse segmentation model module;
  • the two-dimensional fine segmentation model module is used to divide the data and labels of the training set into 2D images along the z-axis, and select a segmentation layer.
  • the two-dimensional fine-segmentation model module includes two cross-attention feature fusion sub-modules corresponding to the two reference layers; each sub-module interacts with the segmentation layer, unifying the feature dimensions of the reference layer and the segmentation layer and then concatenating them for a first fusion.
  • the first fusion result is dot-multiplied with the dimension-unified segmentation-layer features to generate the information correlation matrix of the cross-attention mechanism, which is dot-multiplied with the dimension-unified segmentation-layer features again for a second fusion; a residual operation then fuses the second fusion result with the original segmentation-layer features to obtain the segmentation result of the pancreatic tumor.
  • this example uses the CT data of the pancreatic tumor segmentation task of the public Medical Segmentation Decathlon (MSD) dataset, which contains 281 cases of pancreatic tumor data.
  • This invention divides the data into a training set of 224 cases and a test set of 57 cases.
  • the data of the training set is used to train the three-dimensional coarse segmentation model M c , the reinforcement learning network Q and the two-dimensional fine segmentation model M F , and the test set is used to test the performance of the model.
  • this invention uses the DSC coefficient, Jaccard coefficient, precision, and recall to evaluate the 2D UNet and 3D UNet networks.
  • as an ablation, the present invention also simulates removing the reinforcement learning network and randomly selecting the reference layers from the ROI, and compares this with the proposed method. The results are shown in Table 1.
  • the pancreatic tumor image segmentation method based on reinforcement learning and attention achieved the best results among the compared strategies.
  • introducing the reference layers and cross attention enhances the 2D network's recognition and localization of segmentation targets while avoiding the excessive redundant information that makes segmentation difficult for a 3D network.
  • the reinforcement learning method better reduces the propagation and accumulation of erroneous labels during model training (precision increases by 8.67%).
  • the present invention achieves the best results in pancreatic tumor segmentation.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biophysics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

Disclosed are a pancreatic tumor image segmentation method and system based on reinforcement learning and attention. The method comprises: extracting an ROI region using a three-dimensional coarse-segmentation model, and slicing the ROI region image and the original image into 2D images along the z-axis; selecting two reference layers from the sliced ROI region image using a reinforcement learning network, selecting a segmentation layer from the sliced original image, and jointly inputting the two reference layers and the segmentation layer into a two-dimensional fine-segmentation model equipped with cross-attention feature fusion modules; and using the cross-attention feature fusion modules between the layers so that segmentation features exchange information between the segmentation layer and the reference layers, to obtain a segmentation result of a pancreatic tumor. The present invention learns the related information of non-adjacent 2D images using a cross-attention mechanism, which avoids both the limitation that a 2D neural network cannot use inter-layer information to accurately locate tumors and the inaccurate tumor segmentation caused in a 3D neural network by the redundancy and interference of 3D data.

Description

Pancreatic tumor image segmentation method and system based on reinforcement learning and attention

Technical field
The present invention relates to the field of image segmentation, and in particular to a pancreatic tumor image segmentation method and system based on reinforcement learning and attention.
Background
The five-year survival rate after diagnosis of pancreatic cancer is about 10%, making it one of the malignant tumors with the worst prognosis. Computed Tomography (CT) has been widely used in cancer research, prevention, diagnosis, and treatment, and is currently the main imaging basis for the diagnosis and treatment of pancreatic cancer. Fully automatic segmentation of pancreatic tumors enables large-scale clinical CT image processing, improves patient diagnosis and treatment, and accelerates related clinical research, which is of great significance to families, society, and the national economy.
Automatic segmentation of the pancreas and pancreatic tumors in CT images faces huge challenges. On the one hand, pancreatic tumors differ little from the pancreas and other abdominal organs in CT images, so clear boundaries are hard to delineate. On the other hand, the shape, size, and location of pancreatic tumors are not fixed and are highly variable. Moreover, the pancreas is a small abdominal organ and pancreatic tumors are even smaller; traditional methods and general neural network methods cannot accurately locate the target region. Existing pancreatic tumor segmentation still relies mainly on manual annotation by doctors. The annotation process is tedious and inefficient; more importantly, pancreatic annotation often requires rich experience, so the annotation work is a significant challenge even for doctors.
The difficulties in developing segmentation algorithms for pancreatic tumors in CT lie mainly in the following aspects:
1. With the widespread application of convolutional neural networks in image processing, they are also widely used in medical image segmentation. The current mainstream segmentation method for three-dimensional images takes one or more CT slices as input and uses a complex convolutional neural network to predict the pancreatic region, improving accuracy by learning from prediction errors. Although certain results have been achieved, the network segments the two-dimensional slices independently, ignoring the intrinsic connections between them, so segmentation accuracy remains insufficient.
2. Although information between adjacent layers is easier to use when segmenting directly with a three-dimensional neural network, such a network treats all slices as equally important, introducing a large amount of invalid and interfering information during segmentation. In addition, due to the small receptive field of convolutional kernels, information between non-adjacent slices is difficult to utilize effectively.
Existing medical image segmentation methods adopt a cascade approach: first, one network performs coarse segmentation to obtain the region of interest (ROI) of the target, and then a fine-segmentation network segments within it. The fine-segmentation network often takes the probability map generated by the coarse-segmentation network as input and is responsible only for refining the coarse result. However, such a method prevents the fine-segmentation network from using information outside the ROI, amplifies the prediction errors of the coarse-segmentation network, and introduces many false negatives. For small targets such as pancreatic tumors, the false-negative problem caused by the cascade method is even more pronounced.
发明内容Contents of the invention
The purpose of the present invention is to propose, in view of the shortcomings of the prior art, a pancreatic tumor image segmentation method and system based on reinforcement learning and attention. Existing two-dimensional convolutional neural networks cannot exploit inter-slice information in pancreatic tumor CT, while three-dimensional convolutional neural networks tend to learn erroneous inter-slice position and shape information. When annotating, clinicians usually judge the approximate shape and position of the pancreas and the tumor from a few key slices and then rely on those key slices to segment the remaining slices; this workflow is efficient and accurate. To address the problems of two-dimensional and three-dimensional networks, the present invention uses a reinforcement learning method to imitate this annotation behavior of clinicians, concentrating the attention of a CT image sequence on a few key CT slices. Furthermore, to avoid the false-negative problem caused by cascaded networks, an inter-slice attention mechanism lets information flow between slices, achieving accurate segmentation of pancreatic tumors.
本发明的目的是通过以下技术方案来实现的:一方面,本发明提供了一种基于强化学习和注意力的胰腺肿瘤图像分割方法,该方法包括以下步骤:The object of the present invention is achieved through the following technical solutions: On the one hand, the present invention provides a pancreatic tumor image segmentation method based on reinforcement learning and attention, which method includes the following steps:
(1)采集胰腺癌患者胰腺CT图像并进行预处理,勾画CT图像胰腺肿瘤分割的标签,构建胰腺肿瘤分割训练集;(1) Collect pancreatic CT images of patients with pancreatic cancer and perform preprocessing, outline pancreatic tumor segmentation labels on CT images, and construct a pancreatic tumor segmentation training set;
(2) Construct a three-dimensional coarse segmentation model for pancreatic CT, obtain the pancreatic region of interest (ROI), and split the ROI image and its label into 2D slices along the z-axis;
(3)构建带有交叉注意力特征融合模块的二维精分割模型,利用层间的交叉注意力特征融合模块,使得分割特征在分割层和参考层中进行信息的交互;(3) Construct a two-dimensional precise segmentation model with a cross-attention feature fusion module, and use the cross-attention feature fusion module between layers to enable segmentation features to interact with information in the segmentation layer and the reference layer;
(3.1) Split the data and labels of the training set into 2D slices along the z-axis in the same way as the ROI image in step (2); randomly select two of the 2D slices from step (2) as reference layers, with a 2D slice of the training set data serving as the segmentation layer; the reinforcement learning network is used to select the reference layers for the pancreatic tumor;
(3.2) Each reference layer corresponds to one cross-attention feature fusion module that exchanges information with the segmentation layer. The module first unifies the feature dimensions of the reference layer and the segmentation layer and concatenates them for a first fusion; the first fusion result is then dot-multiplied with the dimension-unified segmentation-layer features to generate the information correlation matrix of the cross-attention mechanism, which is dot-multiplied with the dimension-unified segmentation-layer features again for a second fusion; finally, a residual operation fuses the second fusion result with the original segmentation-layer features to produce the segmentation result;
(4) Given a pancreatic tumor image to be segmented, after preprocessing it is input into the three-dimensional coarse segmentation model to obtain the ROI region, which is then split into slices; the reinforcement learning network selects the reference layers; a segmentation layer is chosen from the split slices of the image to be segmented, and the segmentation layer and reference layers are input into the two-dimensional fine segmentation model to obtain the tumor segmentation result.
进一步地,步骤(1)中,预处理过程具体为:将训练集中所有数据的体素空间距离space调整到1mm;将图像的HU值截断到-100至240之间,然后归一化到0到1之间。 Further, in step (1), the preprocessing process is specifically: adjust the voxel space distance of all data in the training set to 1mm; truncate the HU value of the image to between -100 and 240, and then normalize to 0 to 1.
Further, in step (2), the three-dimensional coarse segmentation model consists of an encoding part and a decoding part: the encoding part comprises four encoding blocks, each followed by a downsampling layer; the decoding part comprises four decoding blocks, each preceded by an upsampling layer; each encoding and decoding block consists of a varying number of convolution-activation layers.
Further, in step (2), the ROI image is denoted X_n^ROI, corresponding to the pancreatic CT image of the n-th pancreatic cancer patient in the training set; X_n^ROI is split into 2D slices along the z-axis, so that x_k^ROI denotes the 2D image of the k-th slice after splitting. The label of the truncated CT image is denoted Y_n^ROI, corresponding to the label of that patient's pancreatic CT image, and is likewise split into 2D slices along the z-axis, so that y_k^ROI denotes the 2D label of the k-th slice, where k ∈ [K_ROI, K_ROI′], K_ROI being the smallest slice index after truncation and K_ROI′ the largest.
Further, in step (2), the loss function used by the three-dimensional coarse segmentation model is the cross-entropy loss Loss_CE:

Loss_CE = -(1/m) · Σ_{j=1}^{m} Σ_{τ=0}^{2} I(y_j = τ) · log p(z_j = τ)

where Ŷ_n denotes the predicted coarse segmentation output by the network, Y_n is the pancreatic tumor segmentation label of the CT image, m is the number of pixels in the input image, y_j and z_j are the true label and predicted label of pixel j respectively, and τ = 0, 1, 2 denote background, pancreas and pancreatic tumor respectively; I(·) is the indicator function, log is the logarithm, and p(·) is the probability predicted by the model.
Further, in step (3.1), the environment of the reinforcement learning network is the ROI region obtained from the original CT image; the state is a pair of slices randomly selected along the z-axis; the action is that, at each iteration, an agent moves its previously selected reference layer forwards or backwards along the z-axis, with one agent per reference layer; the action value function is the loss between the prediction of the two-dimensional fine segmentation model and the true label, and a heuristic function computes the maximum reward of the next action in the current state. During iteration, the reinforcement learning network is trained by negative feedback.
Further, after the reinforcement learning network has been trained, its parameters are fixed; the reinforcement learning network selects the reference layers, and the reference layers together with the segmentation layer are input into the two-dimensional fine segmentation model to complete the training of the two-dimensional fine segmentation model.
Further, in step (3.2), the two reference layers are denoted x_{k=a} and x_{k=b}, and the segmentation layer is denoted x_{k'=c}, where x_{k'} is the 2D image of the k'-th slice after splitting. The interaction of reference layer x_{k=a} with segmentation layer x_{k'=c} is identical to that of reference layer x_{k=b} with x_{k'=c}; for x_{k=a} and x_{k'=c}, the cross-attention feature fusion module is implemented as follows:
The reference layer x_{k=a} and the segmentation layer x_{k'=c} are passed through downsampling and several convolution operations to obtain high-dimensional features F_{k=a} and F_{k'=c} respectively; F_{k=a} and F_{k'=c} serve as the inputs of the cross-attention feature fusion module;
The cross-attention feature fusion module first uses two linear mapping functions g(·) and f(·) to convert the input features from three-dimensional to one-dimensional and to transform the one-dimensional features so that the dimensions of the related features are consistent; the features F_{k=a} and F_{k'=c} are mapped through g(·) and f(·) so that their dimensions are unified:

F_{k=a}′ = g(F_{k=a})
F_{k'=c}′ = f(F_{k'=c})
F_{k=a}′ and F_{k'=c}′ are placed side by side and a convolution kernel W_1 performs a mapping operation, fusing the two for the first time; the fused feature serves as the reference feature:

F_{k=a||k'=c} = W_1 · Cat(F_{k=a}′ || F_{k'=c}′)

where Cat(·) is the concatenation operation along the channel direction;
The dot product of F_{k=a||k'=c} and F_{k'=c}′ generates the information correlation matrix A of the cross-attention mechanism:

q = W_q(F_{k=a||k'=c}), k = W_k(F_{k'=c}′), v = W_v(F_{k'=c}′)

A = sig(q·kᵀ / √D)

where W_q, W_k, W_v are three convolutions giving each feature an adaptive weight; sig(·) is the sigmoid function; D is the number of channels of the feature F_{k'=c}′;
The information correlation matrix A and v undergo a dot-product operation to complete the second fusion, and a residual operation fuses the information of F_{k'=c} into v′, which serves as the segmentation result:

v′ = A·v + f′(F_{k'=c})

where f′(·) is a linear mapping function.
Further, in step (3), the two-dimensional fine segmentation model takes the segmentation layer and the reference layers as input and the prediction for the segmentation layer as output, and performs negative-feedback learning with the Dice Loss:

Loss_Dice = 1 − 2·Σ_{h=1}^{m′} y_h·z_h / (Σ_{h=1}^{m′} y_h + Σ_{h=1}^{m′} z_h)

where m′ is the number of pixels in the input 2D image, y_{k'} denotes the label of the 2D image of the k'-th slice, y_{k'=c} is the prediction for the segmentation layer, and y_h and z_h are the true label and predicted label of pixel h respectively.
另一方面,本发明还提供了一种基于强化学习和注意力的胰腺肿瘤图像分割系统,该系统包括胰腺肿瘤分割训练集构建模块、三维粗分割模型模块、强化学习网络模块和二维精分割模型模块; On the other hand, the present invention also provides a pancreatic tumor image segmentation system based on reinforcement learning and attention. The system includes a pancreatic tumor segmentation training set building module, a three-dimensional rough segmentation model module, a reinforcement learning network module and a two-dimensional fine segmentation module. model module;
所述胰腺肿瘤分割训练集构建模块用于采集胰腺癌患者胰腺CT图像并进行预处理,勾画CT图像胰腺肿瘤分割的标签,构建胰腺肿瘤分割训练集;The pancreatic tumor segmentation training set building module is used to collect and preprocess pancreatic CT images of patients with pancreatic cancer, outline pancreatic tumor segmentation labels on CT images, and construct a pancreatic tumor segmentation training set;
The three-dimensional coarse segmentation model module is used to obtain the pancreatic region of interest (ROI) and to split the ROI image and its label into 2D slices along the z-axis;
所述强化学习网络模块用于从三维粗分割模型模块切分后的2D图像中选取两个参考层;The reinforcement learning network module is used to select two reference layers from the 2D image segmented by the three-dimensional coarse segmentation model module;
The two-dimensional fine segmentation model module is used to split the data and labels of the training set into 2D slices along the z-axis and to select a segmentation layer. It comprises two cross-attention feature fusion sub-modules, one per reference layer, each of which exchanges information with the segmentation layer: the feature dimensions of the reference layer and the segmentation layer are unified and the features are concatenated for a first fusion; the first fusion result is dot-multiplied with the dimension-unified segmentation-layer features to generate the information correlation matrix of the cross-attention mechanism, which is dot-multiplied with the dimension-unified segmentation-layer features again for a second fusion; a residual operation then fuses the second fusion result with the original segmentation-layer features to obtain the tumor segmentation result.
本发明的有益效果:Beneficial effects of the present invention:
1.利用强化学习网络从三维图像中选取两层2D图像作为参考层,不涉及层间信息的传递,为二维神经分割网络的分割提供一个可以参考的分割样例。1. Use the reinforcement learning network to select two layers of 2D images from the three-dimensional image as the reference layer, without involving the transfer of information between layers, and provide a reference segmentation example for the segmentation of the two-dimensional neural segmentation network.
2. The cross-attention mechanism learns the related information of non-adjacent 2D slices, which avoids both the limitation that a 2D neural network cannot use inter-slice information to locate the tumor accurately and the inaccurate tumor segmentation caused by the redundancy of, and interference from, 3D data in a 3D neural network.
3.使用全自动化分割方法模拟临床医生的分割流程,训练和验证过程都不需要医生的介入。3. Use a fully automated segmentation method to simulate the clinician's segmentation process. The training and verification processes do not require doctor's intervention.
附图说明Description of the drawings
图1为本发明提供的一种基于强化学习和注意力的胰腺肿瘤图像分割方法流程图。Figure 1 is a flow chart of a pancreatic tumor image segmentation method based on reinforcement learning and attention provided by the present invention.
图2为本发明的交叉注意力特征融合模块示意图。Figure 2 is a schematic diagram of the cross-attention feature fusion module of the present invention.
图3为本发明的粗分割模型3D UNet结构示意图。Figure 3 is a schematic structural diagram of the coarse segmentation model 3D UNet of the present invention.
图4为本发明的精分割模型2D UNet结构示意图。Figure 4 is a schematic structural diagram of the precise segmentation model 2D UNet of the present invention.
图5为本发明的强化学习训练流程图。Figure 5 is a flow chart of reinforcement learning training of the present invention.
图6为本发明提供的一种基于强化学习和注意力的胰腺肿瘤图像分割系统示意图。Figure 6 is a schematic diagram of a pancreatic tumor image segmentation system based on reinforcement learning and attention provided by the present invention.
具体实施方式Detailed ways
以下结合附图对本发明具体实施方式作进一步详细说明。The specific embodiments of the present invention will be described in further detail below with reference to the accompanying drawings.
如图1所示,本发明提供的一种基于强化学习和注意力的胰腺肿瘤分割方法,实现步骤如下: As shown in Figure 1, the present invention provides a pancreatic tumor segmentation method based on reinforcement learning and attention. The implementation steps are as follows:
(1)胰腺肿瘤分割数据集建立和预处理(1) Creation and preprocessing of pancreatic tumor segmentation data set
(1.1) Collect CT volume data and produce the standard (ground-truth) segmentations of these data. Pancreatic CT images of pancreatic cancer patients are collected and denoted X = {x_j, j = 1, ..., |X|}, and the pancreatic tumor segmentation labels drawn on the CT images are denoted Y = {y_j}, y_j ∈ {0, 1, 2}, where |X| is the number of voxels in X, x_j is the j-th voxel, K ∈ ℕ (the set of natural numbers) is the number of slices along the z-axis, and y_j = 0, 1, 2 indicate that voxel j belongs to background, pancreas or pancreatic tumor respectively. The pancreatic tumor segmentation data set is denoted S = {X_r, Y_r, r = 1, ..., N}, where N is the number of CT images, X_r is the pancreatic CT image of the r-th pancreatic cancer patient in S and Y_r the corresponding tumor segmentation label. The data set is divided into a training set T_train = {X_n, Y_n, n = 1, ..., N_train} and a test set T_val = {X_n', Y_n', n' = 1, ..., N_val}, with N = N_train + N_val, where N_train and N_val are the numbers of training and test cases, X_n is the pancreatic CT image of the n-th patient in T_train with label Y_n, and X_n' is the pancreatic CT image of the n'-th patient in T_val with label Y_n'.
(1.2)将所有数据的x,y,z轴的体素空间距离space调整到1mm。将图像的HU值截断在[-100,240]之间,然后归一化到[0,1]之间。所述HU值即CT值,是测定人体某一局部组织或器官密度大小的一种计量单位,通常称亨氏单位(hounsfield unit,HU),空气为-1000,致密骨为+1000。(1.2) Adjust the voxel space distance of the x, y, and z axes of all data to 1mm. The HU value of the image is truncated between [-100, 240], and then normalized to between [0, 1]. The HU value is the CT value, which is a unit of measurement for measuring the density of a certain local tissue or organ in the human body. It is usually called Hounsfield unit (HU). Air is -1000, and dense bone is +1000.
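The windowing and normalization of step (1.2) can be sketched as follows (an illustrative numpy example; the function name is not part of the disclosure, and the 1 mm resampling is omitted because it depends on a concrete interpolation routine):

```python
import numpy as np

def preprocess_ct(volume_hu):
    """Clip CT intensities to the [-100, 240] HU window and rescale to [0, 1].

    `volume_hu` is a 3D numpy array of Hounsfield units; name and signature
    are illustrative, not part of the original disclosure.
    """
    clipped = np.clip(volume_hu, -100.0, 240.0)
    return (clipped - (-100.0)) / (240.0 - (-100.0))
```

Under this mapping, air (-1000 HU) and any value below -100 HU collapse to 0, while dense bone and anything above 240 HU collapse to 1.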
(2)利用3D UNet网络构建用于胰腺CT粗分割的三维粗分割模型Mc,并进行训练。(2) Use the 3D UNet network to construct a three-dimensional rough segmentation model M c for pancreatic CT rough segmentation and conduct training.
(2.1) As shown in Fig. 3, a 3D UNet network for coarse pancreatic CT segmentation is built and denoted the three-dimensional coarse segmentation model M_c. The model consists of an encoding part and a decoding part: the encoding part comprises four encoding blocks, each followed by a downsampling layer; the decoding part comprises four decoding blocks, each preceded by an upsampling layer. Each encoding and decoding block consists of a varying number of convolution-activation layers. The network is trained on the training set samples with the cross-entropy loss Loss_CE:

Loss_CE = -(1/m) · Σ_{j=1}^{m} Σ_{τ=0}^{2} I(y_j = τ) · log p(z_j = τ)

where Ŷ_n denotes the predicted coarse segmentation output by the network, m is the number of pixels in the input image, y_j and z_j are the true label and predicted label of pixel j respectively, and τ = 0, 1, 2 denote background, pancreas and pancreatic tumor; I(·) is the indicator function, log is the logarithm, and p(·) is the probability predicted by the network.
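As a numerical check of Loss_CE, a minimal numpy implementation over flattened pixels might read (helper names are illustrative; the real model operates on dense probability volumes):

```python
import numpy as np

def cross_entropy_loss(prob, labels):
    """Loss_CE = -(1/m) * sum_j sum_tau I(y_j = tau) * log p(z_j = tau).

    prob: (m, 3) predicted class probabilities for background, pancreas and
    tumour at each of the m pixels; labels: (m,) integers in {0, 1, 2}.
    """
    m = labels.shape[0]
    # the indicator I(y_j = tau) simply selects the probability at the true class
    picked = prob[np.arange(m), labels]
    return float(-np.mean(np.log(picked)))
```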
(2.2)通过Mc模型获取胰腺ROI(region of interest)区域。 (2.2) Obtain the pancreatic ROI (region of interest) area through the M c model.
The prediction probability map of the 3D CT image X_n in the training set T_train is obtained through the model M_c. According to a condition on this probability map, the data are truncated on the 3D CT image X_n, generating a cuboid bounding box; the truncated CT image is denoted X_n^ROI. The obtained 3D ROI region X_n^ROI is split into 2D slices along the z-axis, so that x_k^ROI denotes the 2D image of the k-th slice after splitting; the label of the truncated CT image is denoted Y_n^ROI and is likewise split into 2D slices along the z-axis, so that y_k^ROI denotes the 2D label of the k-th slice, where K_ROI is the smallest slice index after truncation and K_ROI′ the largest.
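The truncation of step (2.2) amounts to cropping the tight cuboid bounding box of the voxels that the coarse model assigns to pancreas or tumour. A sketch (assuming a boolean foreground mask has already been derived from the probability map; helper names are illustrative):

```python
import numpy as np

def crop_roi(volume, fg_mask):
    """Crop `volume` to the axis-aligned bounding box of `fg_mask` (both 3D
    arrays, mask boolean). Returns the cropped volume and the z-range
    (K_ROI, K_ROI') of the kept slices. Illustrative helper."""
    zs, ys, xs = np.nonzero(fg_mask)
    box = tuple(slice(int(c.min()), int(c.max()) + 1) for c in (zs, ys, xs))
    return volume[box], (int(zs.min()), int(zs.max()))
```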
(3)利用带有交叉注意力特征融合模块的2D UNet网络构建二维精分割模型MF用于胰腺肿瘤分割(见图4),并进行预训练。(3) Use the 2D UNet network with a cross-attention feature fusion module to build a two-dimensional precision segmentation model M F for pancreatic tumor segmentation (see Figure 4) and perform pre-training.
(3.1)训练一个二维的精分割模型MF。该模型的主要作用是利用层间的交叉注意力特征融合模块,使得分割特征可以在主要分割层和参考层中进行信息的交互。(3.1) Train a two-dimensional fine segmentation model M F . The main function of this model is to use the cross-attention feature fusion module between layers so that segmentation features can interact with information in the main segmentation layer and the reference layer.
The training set data X_n and Y_n are sliced in the same way as the 3D ROI region X_n^ROI, so that X_n = (x_{k'}, k' ∈ [1, K]), where x_{k'} is the 2D image of the k'-th slice, and Y_n = (y_{k'}, k' ∈ [1, K]), where y_{k'} is the label of the 2D image of the k'-th slice. Two 2D slices, the a-th slice x_{k=a}^ROI and the b-th slice x_{k=b}^ROI with a, b ∈ [K_ROI, K_ROI′], are randomly selected from the 3D ROI region X_n^ROI obtained in step (2). Using x_{k=a}^ROI and x_{k=b}^ROI as reference layers, the c-th slice of the training set data X_n is segmented.
(3.2) Implementation of the cross-attention feature fusion module. In the model, the present invention designs two identical inter-slice information interaction modules based on the cross-attention mechanism, so that inter-slice information can be exchanged between the reference layers and the segmentation layer. The interaction of reference layer x_{k=a}^ROI with segmentation layer x_{k'=c} is identical to that of reference layer x_{k=b}^ROI with x_{k'=c}; for x_{k=a}^ROI and x_{k'=c}, the cross-attention feature fusion module (see Fig. 2) is implemented as follows:
The reference layer x_{k=a}^ROI and the segmentation layer x_{k'=c} are passed through downsampling and several convolution operations to obtain high-dimensional features F_{k=a} and F_{k'=c} respectively, which serve as the inputs of the cross-attention feature fusion module.
The cross-attention feature fusion module first uses two linear mapping functions g(·) and f(·) to convert the input features from three-dimensional to one-dimensional and to transform the one-dimensional features so that the dimensions of the related features are consistent. The features F_{k=a} and F_{k'=c} are mapped through g(·) and f(·) so that their dimensions are unified:

F_{k=a}′ = g(F_{k=a})
F_{k'=c}′ = f(F_{k'=c})
F_{k=a}′ and F_{k'=c}′ are placed side by side and a convolution kernel W_1 performs a mapping operation, fusing the two for the first time; the fused feature serves as the reference feature:

F_{k=a||k'=c} = W_1 · Cat(F_{k=a}′ || F_{k'=c}′)

where Cat(·) is the concatenation operation along the channel direction.
The dot product of F_{k=a||k'=c} and F_{k'=c}′ generates the information correlation matrix A of the cross-attention mechanism:

q = W_q(F_{k=a||k'=c}), k = W_k(F_{k'=c}′), v = W_v(F_{k'=c}′)

A = sig(q·kᵀ / √D)

where W_q, W_k, W_v are three convolutions giving each feature an adaptive weight, sig(·) is the sigmoid function, and D is the number of channels of the feature F_{k'=c}′.
The correlation matrix A and v undergo a dot-product operation to complete the second fusion, and a residual operation fuses the information of F_{k'=c} into v′:

v′ = A·v + f′(F_{k'=c})

where f′(·) is a linear mapping function.
(3.3) Pre-training of the two-dimensional fine segmentation model M_F. With x_{k=a}^ROI, x_{k=b}^ROI and x_{k'=c} as input, the prediction y_{k'=c} for x_{k'=c} as output, and the Dice Loss as the loss function, negative-feedback learning is performed to train the two-dimensional fine segmentation model M_F.
The Dice Loss is defined as:

Loss_Dice = 1 − 2·Σ_{h=1}^{m′} y_h·z_h / (Σ_{h=1}^{m′} y_h + Σ_{h=1}^{m′} z_h)

where m′ is the number of pixels in the input 2D image, and y_h and z_h are the true label and predicted label of pixel h respectively.
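A direct numpy transcription of the Dice Loss above (a small smoothing constant `eps` is added to guard against empty masks; it is not part of the original formula):

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Dice Loss = 1 - 2*sum(y_h * z_h) / (sum(y_h) + sum(z_h)) over the m'
    pixels of one 2D slice; `pred` and `target` are arrays in [0, 1]."""
    inter = np.sum(pred * target)
    return float(1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps))
```

A perfect prediction gives a loss of (almost exactly) 0, and a fully wrong one a loss close to 1.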
(4)强化学习网络训练。(4) Reinforcement learning network training.
(4.1)利用强化学习网络Q选取胰腺肿瘤分割层。(4.1) Use reinforcement learning network Q to select pancreatic tumor segmentation layers.
The reinforcement learning network consists of a 3D ResNet whose output is a vector mapped to the agents' action space. The whole reinforcement learning framework can be divided into the following parts: Environment, Agents, States, Actions, Reward (heuristic function) and loss function. The meaning of each part and the reinforcement learning procedure are explained below:
Environment: the ROI region X_n^ROI obtained from the original CT image serves as the environment of the whole reinforcement learning procedure.
Agents: to select the a-th and b-th reference slices x_{k=a}^ROI and x_{k=b}^ROI, the present invention sets up two agents, Agent1 and Agent2.
State: s(t) is defined at iteration t as the two reference slices, the a-th and the b-th, selected from X_n^ROI by the reinforcement learning network; the initial state is a pair of slices randomly selected from X_n^ROI along the z-axis.
Action: the action policy function of Agent1 and Agent2 is π(act(t)|st(t)); here the present invention uses a greedy policy that traverses all actions in the action space, st(t) and act(t) being the state and the current agent's action respectively. The action space of Agent1 and Agent2 is {-3, -2, -1, 0, 1, 2, 3, Stop}; each numeric action moves the agent's previously selected reference slice forwards or backwards along the z-axis at each iteration. The final Stop action terminates the selection by Q, indicating that Agent1 and Agent2 can find no reference slice that further improves the result.
Action value function: the present invention expresses it through the Dice loss between the set Ŷ_n of predictions of the two-dimensional fine segmentation model M_F over all 2D slices of a CT image X_n and the true label Y:

R(st(t), act(t)) = Loss_Dice(Ŷ_n, Y)
Heuristic function: the heuristic function computes the maximum reward of the next action in the current state:

Q(st(t), act(t)) = R(st(t), act(t)) + γ · max_{act} Q(st(t+1), act)

where γ ∈ [0, 1] is a decay coefficient: the more actions taken, the smaller the benefit.
Loss function: during iteration, the reinforcement learning network is trained by negative feedback, so that Agent1 and Agent2 can quickly and accurately find the most suitable reference slices. The loss of the t-th iteration can be expressed as:

L_t = (R(st(t), act(t)) + γ · max_{act} Q(st(t+1), act) − Q(st(t), act(t)))²
强化学习网络的训练步骤说明(见图5):Instructions for the training steps of the reinforcement learning network (see Figure 5):
In one iteration t, the reinforcement learning network makes Agent1 and Agent2 select two reference slices, the a-th slice x_{k=a}^ROI and the b-th slice x_{k=b}^ROI, from the environment X_n^ROI; this is recorded as state s(t). The reference slices together with x_{k'=c} are input into the two-dimensional fine segmentation model M_F to obtain the value R(st(t), act(t)) of the current action. A greedy, exhaustive search then yields the current maximum reward Q(st(t), act(t)), from which the negative-feedback loss L_t is computed to update the weights of the reinforcement learning network Q.
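The selection loop can be illustrated with a toy, tabular stand-in for the 3D ResNet Q-network; the segmentation reward is stubbed by a fixed landscape, so everything below is schematic rather than the disclosed implementation:

```python
ACTIONS = [-3, -2, -1, 0, 1, 2, 3, "Stop"]   # per-agent moves along z
GAMMA = 0.9                                   # decay coefficient gamma

def reward(state):
    """Stub for R(st, act): in the patent this is the Dice loss of the fine
    model M_F when the slices in `state` serve as reference layers. Here a
    fixed toy landscape peaks when the agents sit at slices 10 and 20."""
    a, b = state
    return -(abs(a - 10) + abs(b - 20)) / 30.0

def step(state, acts, z_min=0, z_max=29):
    """Apply the two agents' joint action, clamped to the ROI z-range."""
    a, b = state
    da = 0 if acts[0] == "Stop" else acts[0]
    db = 0 if acts[1] == "Stop" else acts[1]
    return (min(max(a + da, z_min), z_max), min(max(b + db, z_min), z_max))

Q = {}                                        # tabular Q(st, act) stand-in

def q_update(state, acts, lr=0.5):
    """One negative-feedback update toward R + gamma * max_act' Q(st', act')."""
    nxt = step(state, acts)
    target = reward(nxt) + GAMMA * max(
        Q.get((nxt, (x, y)), 0.0) for x in ACTIONS for y in ACTIONS)
    key = (state, acts)
    Q[key] = Q.get(key, 0.0) + lr * (target - Q.get(key, 0.0))
    return nxt
```

With the greedy policy of the patent, each iteration would evaluate all 8x8 joint actions and move to the argmax of Q.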
(5)固定强化学习网络,更新二维精分割模型MF模型权重。(5) Fix the reinforcement learning network and update the weights of the two-dimensional fine segmentation model M F model.
After the reinforcement learning network has been trained, its parameters are fixed. The reinforcement learning network selects the a-th and b-th reference slices x_{k=a}^ROI and x_{k=b}^ROI, and the reference layers together with the segmentation layer x_{k'=c} are input into the model M_F to complete the training of the two-dimensional fine segmentation model.
(6)胰腺肿瘤的自动分割。(6) Automatic segmentation of pancreatic tumors.
(6.1) The test images of the given test set are resampled and intensity-adjusted: the HU values are truncated to [-100, 240] and then normalized to [0, 1]. The processed test image is input into the three-dimensional coarse segmentation model M_c to obtain the segmentation probability map of pancreas and tumor, from which the ROI region is obtained.
(6.2)将ROI区域输入到强化学习网络Q,获取参考体数据层数参考层。(6.2) Change the ROI area Input to the reinforcement learning network Q to obtain the reference volume reference layer.
(6.3)将测试图像沿体数据层逐层分为2D图像,选择一个分割层,将分割层和参考层输入到MF进行分割,得到肿瘤的分割结果。(6.3) Divide the test image into 2D images layer by layer along the volume data layer, select a segmentation layer, input the segmentation layer and reference layer to MF for segmentation, and obtain the tumor segmentation result.
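The intensity preprocessing of step (6.1) can be sketched as follows. The resampling step is omitted here; it would normally be done with a medical-imaging library (an assumption, as the patent does not name one).

```python
import numpy as np

def preprocess_ct(volume_hu: np.ndarray) -> np.ndarray:
    """Truncate HU values to [-100, 240], then normalize to [0, 1]."""
    clipped = np.clip(volume_hu, -100.0, 240.0)
    return (clipped - (-100.0)) / (240.0 - (-100.0))

# Small illustrative volume: air (-500), fat boundary (-100),
# soft tissue (70), bone (1000).
vol = np.array([[-500.0, -100.0],
                [70.0, 1000.0]])
out = preprocess_ct(vol)
# -500 and -100 both map to 0.0; 1000 maps to 1.0; 70 maps to 0.5
```

Any HU value outside the window collapses to the nearest bound, which is what makes the subsequent [0, 1] normalization stable across scanners.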
On the other hand, as shown in Figure 6, the present invention also provides a pancreatic tumor image segmentation system based on reinforcement learning and attention. The system includes a pancreatic tumor segmentation training set construction module, a three-dimensional coarse segmentation model module, a reinforcement learning network module, and a two-dimensional fine segmentation model module.
The pancreatic tumor segmentation training set construction module collects pancreatic CT images of pancreatic cancer patients, preprocesses them, delineates pancreatic tumor segmentation labels on the CT images, and constructs the pancreatic tumor segmentation training set.
The three-dimensional coarse segmentation model module obtains the pancreatic ROI (region of interest) and divides the image of the ROI region and its label into 2D images along the z-axis.
The reinforcement learning network module selects two reference layers from the 2D images produced by the three-dimensional coarse segmentation model module.
The two-dimensional fine segmentation model module divides the data and labels of the training set into 2D images along the z-axis and selects a segmentation layer. It includes two cross-attention feature fusion submodules, one for each reference layer. Each submodule exchanges information with the segmentation layer: the feature dimensions of the reference layer and the segmentation layer are unified and then concatenated for a first fusion; the first fusion result undergoes a dot-product operation with the dimension-unified segmentation layer features to generate the information correlation matrix of the cross-attention mechanism; this matrix then undergoes a dot-product operation with the dimension-unified segmentation layer features for a second fusion; finally, a residual operation fuses the second fusion result with the information of the original segmentation layer features to obtain the pancreatic tumor segmentation result.
The following is a specific embodiment of the present invention.
This example uses CT image data from the pancreatic tumor segmentation dataset of the public Medical Segmentation Decathlon (MSD). The MSD dataset contains 281 cases of pancreatic tumor data.
The present invention divides the data into a training set of 224 cases and a test set of 57 cases. The training set is used to train the three-dimensional coarse segmentation model M_c, the reinforcement learning network Q, and the two-dimensional fine segmentation model M_F; the test set is used to evaluate model performance. The DSC coefficient, Jaccard coefficient, precision, and recall are used for evaluation against the 2D UNet and 3D UNet networks.
In addition, to verify the effectiveness of the cross-attention feature fusion module, the present invention adds a simulation in which the reinforcement learning network is removed and the reference layers are selected at random from the ROI; this variant is compared with the present invention, with results shown in Table 1.
Table 1. Comparison between the segmentation method based on reinforcement learning and cross-attention and other methods on pancreatic tumor segmentation. [Table values are not reproduced in this text.]
The results show that the pancreatic tumor image segmentation method based on reinforcement learning and attention achieves the best performance among the compared strategies. Compared with the 2D UNet and 3D UNet networks, the introduction of reference layers and cross-attention enhances the 2D network's recognition and localization of the segmentation target, while avoiding the excessive redundant information introduced by a 3D network, which makes segmentation difficult. In addition, the reinforcement learning method better reduces the propagation and accumulation of erroneous pseudo-labels during model training (precision improves by 8.67%). Compared with the other methods, the present invention achieves the best results in pancreatic tumor segmentation.
The above embodiments are intended to illustrate, not limit, the present invention. Any modification or change made to the present invention within its spirit and the protection scope of the claims falls within the protection scope of the present invention.

Claims (10)

  1. A pancreatic tumor image segmentation method based on reinforcement learning and attention, characterized in that the method comprises the following steps:
    (1) collecting pancreatic CT images of pancreatic cancer patients and preprocessing them, delineating pancreatic tumor segmentation labels on the CT images, and constructing a pancreatic tumor segmentation training set;
    (2) constructing a three-dimensional coarse segmentation model for coarse segmentation of pancreatic CT, obtaining the pancreatic ROI (region of interest), and dividing the image of the ROI region and its label into 2D images along the z-axis;
    (3) constructing a two-dimensional fine segmentation model with a cross-attention feature fusion module, and using the inter-layer cross-attention feature fusion module so that segmentation features exchange information between the segmentation layer and the reference layers;
    (3.1) dividing the data and labels of the training set into 2D images along the z-axis in the same manner as the ROI images in step (2), and randomly selecting two of the 2D images obtained in step (2) as reference layers, with the 2D images obtained from the training set data serving as segmentation layers; using a reinforcement learning network to select subsequent reference layers for the pancreatic tumor, wherein the environment of the reinforcement learning network is the ROI region obtained from the original CT image, the state is two slices randomly selected along the z-axis, the action is each agent moving forward or backward along the z-axis from the reference layer selected in the previous iteration, and each reference layer corresponds to one agent;
    (3.2) each reference layer corresponding to a cross-attention feature fusion module that exchanges information with the segmentation layer, wherein the cross-attention feature fusion module unifies the feature dimensions of the reference layer and the segmentation layer and then performs a concatenation operation for a first fusion; the first fusion result undergoes a dot-product operation with the dimension-unified segmentation layer features to generate the information correlation matrix of the cross-attention mechanism, which then undergoes a dot-product operation with the dimension-unified segmentation layer features for a second fusion; and a residual operation fuses the second fusion result with the information of the original segmentation layer features as the segmentation result;
    (4) given a pancreatic tumor image to be segmented, after preprocessing, inputting it into the three-dimensional coarse segmentation model to obtain the ROI region and dividing it; using the reinforcement learning network to select the reference layers; after dividing the pancreatic tumor image to be segmented, selecting a segmentation layer, and inputting the segmentation layer and the reference layers into the two-dimensional fine segmentation model for segmentation to obtain the pancreatic tumor segmentation result.
  2. The pancreatic tumor image segmentation method based on reinforcement learning and attention according to claim 1, characterized in that, in step (1), the preprocessing specifically comprises: adjusting the voxel spacing of all data in the training set to 1 mm; and truncating the HU values of the images to between -100 and 240 and then normalizing them to between 0 and 1.
  3. The pancreatic tumor image segmentation method based on reinforcement learning and attention according to claim 1, characterized in that, in step (2), the three-dimensional coarse segmentation model consists of an encoding part and a decoding part; the encoding part comprises four encoding blocks, each followed by a downsampling layer; the decoding part comprises four decoding blocks, each preceded by an upsampling layer; and each encoding block and decoding block consists of a varying number of convolution-activation layers.
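As a rough sketch of the claim-3 architecture, the spatial sizes through four encode/decode stages can be tracked as below; the factor-2 down/upsampling is an assumption, since the claim fixes only the block counts.

```python
def encoder_decoder_shapes(side: int, blocks: int = 4, factor: int = 2):
    """Track the spatial side length through an encoder-decoder model:
    `blocks` encoding blocks each followed by downsampling, then `blocks`
    decoding blocks each preceded by upsampling (restoring the input size)."""
    enc = [side]
    for _ in range(blocks):   # encode: conv block, then downsample
        side //= factor
        enc.append(side)
    dec = []
    for _ in range(blocks):   # decode: upsample, then conv block
        side *= factor
        dec.append(side)
    return enc, dec

enc, dec = encoder_decoder_shapes(96)
# enc: [96, 48, 24, 12, 6]; dec: [12, 24, 48, 96]
```

The symmetric halving/doubling is why the decoder ends at the original resolution, which the coarse model needs to emit a per-voxel probability map.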
  4. The pancreatic tumor image segmentation method based on reinforcement learning and attention according to claim 1, characterized in that, in step (2), the ROI region image corresponds to the pancreatic CT image of the n-th pancreatic cancer patient in the training set and is divided into 2D images along the z-axis, x_k denoting the 2D image of the k-th layer after division; the label of the truncated CT image corresponds to the label of the pancreatic CT image of the n-th pancreatic cancer patient in the training set and is likewise divided into 2D images along the z-axis, y_k denoting the 2D label corresponding to the k-th layer; K_ROI is the minimum layer index after truncation, and K_ROI′ is the maximum layer index after truncation.
  5. The pancreatic tumor image segmentation method based on reinforcement learning and attention according to claim 1, characterized in that, in step (2), the loss function used by the three-dimensional coarse segmentation model is the cross-entropy loss function Loss_CE:
    Loss_CE = -(1/m) Σ_{j=1}^{m} Σ_{τ=0}^{2} I(y_j = τ) log p(z_j = τ)
    where the network output is the predicted coarse segmentation result, Y_n is the pancreatic tumor segmentation label of the CT image, m is the number of pixels in the input image, y_j and z_j are respectively the true label and the predicted label of pixel j, and τ = 0, 1, 2 respectively represent background, pancreas, and pancreatic tumor; I(·) is the indicator function, log is the logarithmic function, and p(·) is the probability function predicted by the model.
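A minimal numerical sketch of the three-class cross-entropy Loss_CE, assuming per-pixel class probabilities are already available (τ in {0: background, 1: pancreas, 2: tumor}):

```python
import numpy as np

def cross_entropy(prob: np.ndarray, labels: np.ndarray) -> float:
    # prob: (m, 3) predicted class probabilities p(z_j = tau)
    # labels: (m,) true labels y_j; the indicator I(y_j = tau) selects
    # exactly one probability per pixel, whose log is averaged over m pixels.
    m = labels.shape[0]
    return float(-np.mean(np.log(prob[np.arange(m), labels])))

prob = np.array([[0.8, 0.1, 0.1],    # pixel confidently background
                 [0.2, 0.7, 0.1],    # pixel mostly pancreas
                 [0.1, 0.2, 0.7]])   # pixel mostly tumor
labels = np.array([0, 1, 2])
loss = cross_entropy(prob, labels)   # small positive value for good predictions
```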
  6. The pancreatic tumor image segmentation method based on reinforcement learning and attention according to claim 1, characterized in that, in step (3.1), the action-value function of the reinforcement learning network is the loss function between the prediction of the two-dimensional fine segmentation model and the true label, and a heuristic function computes the maximum reward value of the next action in the current state; during iteration, the reinforcement learning network is trained by the negative feedback method.
  7. The pancreatic tumor image segmentation method based on reinforcement learning and attention according to claim 6, characterized in that, after the reinforcement learning network has been trained, its parameters are fixed; the reinforcement learning network selects the reference layers, and the reference layers and the segmentation layer are input into the two-dimensional fine segmentation model to complete training of the two-dimensional fine segmentation model.
  8. The pancreatic tumor image segmentation method based on reinforcement learning and attention according to claim 4, characterized in that, in step (3.2), the two reference layers are denoted x_{k=a} and x_{k=b}, and the segmentation layer is denoted x_{k′=c}, the 2D image of the k′-th layer after division; the interaction process between reference layer x_{k=a} and segmentation layer x_{k′=c} is identical to the interaction process between reference layer x_{k=b} and segmentation layer x_{k′=c}; for reference layer x_{k=a} and segmentation layer x_{k′=c}, the cross-attention feature fusion module is implemented as follows:
    after downsampling and multiple convolution operations, the reference layer x_{k=a} and the segmentation layer x_{k′=c} respectively yield the high-dimensional features F_{k=a} and F_{k′=c}; F_{k=a} and F_{k′=c} serve as the inputs of the cross-attention feature fusion module;
    the cross-attention feature fusion module first uses two linear mapping functions g(·) and f(·) to convert the input features from three dimensions to one and to transform the one-dimensional features so that the dimensions of the related features are consistent; the features F_{k=a} and F_{k′=c} are mapped by g(·) and f(·) to unify the feature dimensions:
    F_{k=a}′ = g(F_{k=a})
    F_{k′=c}′ = f(F_{k′=c})
    F_{k=a}′ and F_{k′=c}′ are placed in parallel and mapped with a convolution kernel W_1, fusing the two for the first time; the fused feature serves as the reference feature:
    F_{k=a||k′=c} = W_1 Cat(F_{k=a}′ || F_{k′=c}′)
    where Cat(·) is the concatenation operation along the channel direction;
    F_{k=a||k′=c} and F_{k′=c}′ undergo a dot-product operation to generate the information correlation matrix A of the cross-attention mechanism:
    q = W_q(F_{k=a||k′=c}), k = W_k(F_{k′=c}′), v = W_v(F_{k′=c}′)
    A = sig(q k^T / √D)
    where W_q, W_k, W_v are three convolutions used to assign adaptive weights to the respective features; sig(·) is the sigmoid function; D is the number of channels of the feature F_{k′=c}′;
    the information correlation matrix A and v undergo a dot-product operation to complete the second fusion, and a residual operation fuses the information of F_{k′=c} into v′:
    v′ = Av + f′(F_{k′=c})
    where f′(·) is a linear mapping function.
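The fusion of claim 8 can be sketched with dense matrices standing in for the 1x1 convolutions W_1, W_q, W_k, W_v; the tensor orientations (and hence whether A multiplies v on the left or right) are an assumption made here only to keep the shapes consistent:

```python
import numpy as np

rng = np.random.default_rng(1)
D, N = 4, 6                        # channels, flattened spatial positions

def sig(x):
    return 1.0 / (1.0 + np.exp(-x))   # sigmoid, as sig(.) in the claim

F_a = rng.standard_normal((D, N))  # reference-layer feature F_{k=a}, flattened
F_c = rng.standard_normal((D, N))  # segmentation-layer feature F_{k'=c}, flattened

g = rng.standard_normal((D, D))
f = rng.standard_normal((D, D))
Fa_p, Fc_p = g @ F_a, f @ F_c      # unify the feature dimensions

W1 = rng.standard_normal((D, 2 * D))
F_ac = W1 @ np.concatenate([Fa_p, Fc_p], axis=0)   # first fusion: channel concat + mapping

Wq = rng.standard_normal((D, D))
Wk = rng.standard_normal((D, D))
Wv = rng.standard_normal((D, D))
q, k, v = Wq @ F_ac, Wk @ Fc_p, Wv @ Fc_p
A = sig(q.T @ k / np.sqrt(D))      # information correlation matrix, (N, N)

f_res = rng.standard_normal((D, D))
v_out = v @ A + f_res @ F_c        # second fusion plus residual connection
```

Because the attention weights come through a sigmoid rather than a softmax, each entry of A lies in (0, 1) independently, which matches the sig(·) stated in the claim.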
  9. The pancreatic tumor image segmentation method based on reinforcement learning and attention according to claim 5, characterized in that, in step (3), the two-dimensional fine segmentation model takes the segmentation layer and the reference layers as input and the prediction result of the segmentation layer as output, using the loss function Dice Loss for negative feedback learning:
    Loss_Dice = 1 - 2 Σ_{h=1}^{m′} y_h z_h / (Σ_{h=1}^{m′} y_h + Σ_{h=1}^{m′} z_h)
    where m′ is the number of pixels in the input 2D image, y_{k′} denotes the label of the 2D image of the k′-th layer after division, y_{k′=c} is the prediction result of the segmentation layer, and y_h and z_h are respectively the true label and the predicted label of pixel h.
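A minimal sketch of the Dice Loss above; the small epsilon guarding against a zero denominator is an addition not stated in the claim:

```python
import numpy as np

def dice_loss(pred: np.ndarray, label: np.ndarray, eps: float = 1e-7) -> float:
    # 1 - 2*|pred ∩ label| / (|pred| + |label|), summed over the m' pixels
    inter = float(np.sum(pred * label))
    return 1.0 - (2.0 * inter) / (float(np.sum(pred)) + float(np.sum(label)) + eps)

label = np.array([0.0, 1.0, 1.0, 0.0])
perfect = dice_loss(label, label)          # near 0 for a perfect prediction
disjoint = dice_loss(1.0 - label, label)   # 1 for a fully wrong prediction
```

Unlike the per-pixel cross-entropy of claim 5, Dice Loss scores overlap directly, which is why it is the common choice for small, imbalanced targets such as pancreatic tumors.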
  10. A pancreatic tumor image segmentation system based on reinforcement learning and attention, characterized in that the system comprises a pancreatic tumor segmentation training set construction module, a three-dimensional coarse segmentation model module, a reinforcement learning network module, and a two-dimensional fine segmentation model module;
    the pancreatic tumor segmentation training set construction module is used to collect pancreatic CT images of pancreatic cancer patients and preprocess them, delineate pancreatic tumor segmentation labels on the CT images, and construct a pancreatic tumor segmentation training set;
    the three-dimensional coarse segmentation model module is used to obtain the pancreatic ROI (region of interest) and divide the image of the ROI region and its label into 2D images along the z-axis;
    the reinforcement learning network module is used to select two reference layers from the 2D images divided by the three-dimensional coarse segmentation model module;
    the two-dimensional fine segmentation model module is used to divide the data and labels of the training set into 2D images along the z-axis and to select a segmentation layer; the two-dimensional fine segmentation model module comprises two cross-attention feature fusion submodules corresponding respectively to the two reference layers; the two cross-attention feature fusion submodules each exchange information with the segmentation layer, unifying the feature dimensions of the reference layer and the segmentation layer and then performing a concatenation operation for a first fusion; the first fusion result undergoes a dot-product operation with the dimension-unified segmentation layer features to generate the information correlation matrix of the cross-attention mechanism, which then undergoes a dot-product operation with the dimension-unified segmentation layer features for a second fusion; and a residual operation fuses the second fusion result with the information of the original segmentation layer features to obtain the pancreatic tumor segmentation result.
PCT/CN2023/094394 2022-05-19 2023-05-16 Pancreatic tumor image segmentation method and system based on reinforcement learning and attention WO2023221954A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210543491.9A CN114663431B (en) 2022-05-19 2022-05-19 Pancreatic tumor image segmentation method and system based on reinforcement learning and attention
CN202210543491.9 2022-05-19

Publications (1)

Publication Number Publication Date
WO2023221954A1 true WO2023221954A1 (en) 2023-11-23

Family

ID=82037025

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/094394 WO2023221954A1 (en) 2022-05-19 2023-05-16 Pancreatic tumor image segmentation method and system based on reinforcement learning and attention

Country Status (2)

Country Link
CN (1) CN114663431B (en)
WO (1) WO2023221954A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117291913A (en) * 2023-11-24 2023-12-26 长江勘测规划设计研究有限责任公司 Apparent crack measuring method for hydraulic concrete structure
CN117393043A (en) * 2023-12-11 2024-01-12 浙江大学 Thyroid papilloma BRAF gene mutation detection device
CN117422715A (en) * 2023-12-18 2024-01-19 华侨大学 Global information-based breast ultrasonic tumor lesion area detection method
CN117455935A (en) * 2023-12-22 2024-01-26 中国人民解放军总医院第一医学中心 Abdominal CT (computed tomography) -based medical image fusion and organ segmentation method and system
CN117593292A (en) * 2024-01-18 2024-02-23 江西师范大学 CT image target detection method based on three-dimensional orthogonal attention

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114663431B (en) * 2022-05-19 2022-08-30 浙江大学 Pancreatic tumor image segmentation method and system based on reinforcement learning and attention
CN115359881B (en) * 2022-10-19 2023-04-07 成都理工大学 Nasopharyngeal carcinoma tumor automatic delineation method based on deep learning
CN116189166A (en) * 2023-02-07 2023-05-30 台州勃美科技有限公司 Meter reading method and device and robot
CN116109605B (en) * 2023-02-13 2024-04-02 北京医智影科技有限公司 Medical image tumor segmentation system, training set construction method and model training method
CN116309385B (en) * 2023-02-27 2023-10-10 之江实验室 Abdominal fat and muscle tissue measurement method and system based on weak supervision learning
CN115954106B (en) * 2023-03-15 2023-05-12 吉林华瑞基因科技有限公司 Tumor model optimizing system based on computer-aided simulation
CN116468741B (en) * 2023-06-09 2023-09-22 南京航空航天大学 Pancreatic cancer segmentation method based on 3D physical space domain and spiral decomposition space domain

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110047082A (en) * 2019-03-27 2019-07-23 深圳大学 Pancreatic Neuroendocrine Tumors automatic division method and system based on deep learning
WO2022076367A1 (en) * 2020-10-05 2022-04-14 Memorial Sloan Kettering Cancer Center Reinforcement learning to perform localization, segmentation, and classification on biomedical images
CN114494289A (en) * 2022-01-13 2022-05-13 同济大学 Pancreatic tumor image segmentation processing method based on local linear embedded interpolation neural network
CN114663431A (en) * 2022-05-19 2022-06-24 浙江大学 Pancreatic tumor image segmentation method and system based on reinforcement learning and attention

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109801294A (en) * 2018-12-14 2019-05-24 深圳先进技术研究院 Three-dimensional atrium sinistrum dividing method, device, terminal device and storage medium
CN111091575B (en) * 2019-12-31 2022-10-18 电子科技大学 Medical image segmentation method based on reinforcement learning method
CN111415342B (en) * 2020-03-18 2023-12-26 北京工业大学 Automatic detection method for pulmonary nodule images of three-dimensional convolutional neural network by fusing attention mechanisms
US11526698B2 (en) * 2020-06-05 2022-12-13 Adobe Inc. Unified referring video object segmentation network
CN112116605B (en) * 2020-09-29 2022-04-22 西北工业大学深圳研究院 Pancreas CT image segmentation method based on integrated depth convolution neural network
CN112201328B (en) * 2020-10-09 2022-06-21 浙江德尚韵兴医疗科技有限公司 Breast mass segmentation method based on cross attention mechanism
CN113221987A (en) * 2021-04-30 2021-08-06 西北工业大学 Small sample target detection method based on cross attention mechanism
CN114119515A (en) * 2021-11-14 2022-03-01 北京工业大学 Brain tumor detection method based on attention mechanism and MRI multi-mode fusion
CN114219943B (en) * 2021-11-24 2023-05-26 华南理工大学 CT image organ at risk segmentation system based on deep learning

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110047082A (en) * 2019-03-27 2019-07-23 深圳大学 Pancreatic Neuroendocrine Tumors automatic division method and system based on deep learning
WO2022076367A1 (en) * 2020-10-05 2022-04-14 Memorial Sloan Kettering Cancer Center Reinforcement learning to perform localization, segmentation, and classification on biomedical images
CN114494289A (en) * 2022-01-13 2022-05-13 同济大学 Pancreatic tumor image segmentation processing method based on local linear embedded interpolation neural network
CN114663431A (en) * 2022-05-19 2022-06-24 浙江大学 Pancreatic tumor image segmentation method and system based on reinforcement learning and attention

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117291913A (en) * 2023-11-24 2023-12-26 长江勘测规划设计研究有限责任公司 Apparent crack measuring method for hydraulic concrete structure
CN117291913B (en) * 2023-11-24 2024-04-16 长江勘测规划设计研究有限责任公司 Apparent crack measuring method for hydraulic concrete structure
CN117393043A (en) * 2023-12-11 2024-01-12 浙江大学 Thyroid papilloma BRAF gene mutation detection device
CN117393043B (en) * 2023-12-11 2024-02-13 浙江大学 Thyroid papilloma BRAF gene mutation detection device
CN117422715A (en) * 2023-12-18 2024-01-19 华侨大学 Global information-based breast ultrasonic tumor lesion area detection method
CN117422715B (en) * 2023-12-18 2024-03-12 华侨大学 Global information-based breast ultrasonic tumor lesion area detection method
CN117455935A (en) * 2023-12-22 2024-01-26 中国人民解放军总医院第一医学中心 Abdominal CT (computed tomography) -based medical image fusion and organ segmentation method and system
CN117455935B (en) * 2023-12-22 2024-03-19 中国人民解放军总医院第一医学中心 Abdominal CT (computed tomography) -based medical image fusion and organ segmentation method and system
CN117593292A (en) * 2024-01-18 2024-02-23 江西师范大学 CT image target detection method based on three-dimensional orthogonal attention
CN117593292B (en) * 2024-01-18 2024-04-05 江西师范大学 CT image target detection method based on three-dimensional orthogonal attention

Also Published As

Publication number Publication date
CN114663431B (en) 2022-08-30
CN114663431A (en) 2022-06-24

Similar Documents

Publication Publication Date Title
WO2023221954A1 (en) Pancreatic tumor image segmentation method and system based on reinforcement learning and attention
CN113870258B (en) Counterwork learning-based label-free pancreas image automatic segmentation system
CN114240962B (en) CT image liver tumor region automatic segmentation method based on deep learning
CN110675406A (en) CT image kidney segmentation algorithm based on residual double-attention depth network
CN113674253B (en) Automatic segmentation method for rectal cancer CT image based on U-transducer
WO2023071531A1 (en) Liver ct automatic segmentation method based on deep shape learning
WO2021115312A1 (en) Method for automatically sketching contour line of normal organ in medical image
CN114494296A (en) Brain glioma segmentation method and system based on fusion of Unet and Transformer
Xie et al. SERU: A cascaded SE‐ResNeXT U‐Net for kidney and tumor segmentation
CN114066866A (en) Medical image automatic segmentation method based on deep learning
CN114998265A (en) Liver tumor segmentation method based on improved U-Net
CN109215035B (en) Brain MRI hippocampus three-dimensional segmentation method based on deep learning
Fu et al. Deep‐Learning‐Based CT Imaging in the Quantitative Evaluation of Chronic Kidney Diseases
CN115619797A (en) Lung image segmentation method of parallel U-Net network based on attention mechanism
Pandey et al. Tumorous kidney segmentation in abdominal CT images using active contour and 3D-UNet
CN114581474A (en) Automatic clinical target area delineation method based on cervical cancer CT image
CN114387282A (en) Accurate automatic segmentation method and system for medical image organs
Peng et al. Lung contour detection in chest X-ray images using mask region-based convolutional neural network and adaptive closed polyline searching method
CN116645380A (en) Automatic segmentation method for esophageal cancer CT image tumor area based on two-stage progressive information fusion
CN116229067A (en) Channel attention-based liver cell cancer CT image segmentation method
CN116258732A (en) Esophageal cancer tumor target region segmentation method based on cross-modal feature fusion of PET/CT images
CN116091412A (en) Method for segmenting tumor from PET/CT image
CN115761230A (en) Spine segmentation method based on three-dimensional image
Dandıl et al. A Mask R-CNN based Approach for Automatic Lung Segmentation in Computed Tomography Scans
Chang et al. Image segmentation in 3D brachytherapy using convolutional LSTM

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23806906

Country of ref document: EP

Kind code of ref document: A1