WO2019136908A1 - Cancer recognition method, device and storage medium - Google Patents

Cancer recognition method, device and storage medium

Info

Publication number
WO2019136908A1
WO2019136908A1 (PCT/CN2018/089132)
Authority
WO
WIPO (PCT)
Prior art keywords
cancerous
cancer
preset
pathological slice
picture
Prior art date
Application number
PCT/CN2018/089132
Other languages
English (en)
French (fr)
Inventor
王健宗
吴天博
刘莉红
刘新卉
肖京
Original Assignee
平安科技(深圳)有限公司
Priority date
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司
Publication of WO2019136908A1 publication Critical patent/WO2019136908A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24147Distances to closest patterns, e.g. nearest neighbour classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20021Dividing image into blocks, subimages or windows
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30096Tumor; Lesion

Definitions

  • The present application relates to the field of picture recognition technologies, and in particular to a cancer recognition method, apparatus, and computer readable storage medium.
  • Cancer is one of several diseases that are difficult to cure today. According to statistics, the number of new cases in China is about 2.2 million per year, and the number of people dying from cancer is about 1.6 million.
  • The clinical manifestations of cancer vary depending on the location of the cancer and the stage of canceration. There are usually no obvious symptoms in the early stage of cancer, and by the time a patient shows specific symptoms the cancer is often already at an advanced stage. Therefore, how to detect cancerous parts of the body accurately and quickly has become one of the most important topics in the medical profession.
  • At present, the commonly used cancer identification method is manual examination of pathological sections.
  • In general, patients only spend the money and time on a manual pathological examination of pathological sections if they are already suspected of having cancer.
  • Moreover, a manual pathological examination usually takes several days, which to a certain extent greatly increases the risk that the cancer becomes incurable and seriously endangers patients' lives.
  • In view of the above, the present application provides a cancer identification method, device and computer readable storage medium, the main purpose of which is to use big data and artificial intelligence detection technology to quickly examine pathological slice pictures, thereby improving cancer recognition efficiency.
  • To achieve the above object, the present application provides a cancer identification method, the method comprising:
  • Receiving step: receiving a pathological slice picture to be subjected to cancer recognition;
  • Determining step: determining, according to a mapping relationship between the cancer type to be identified and preset type models, the preset type model corresponding to the pathological slice picture;
  • Identifying step: identifying the pathological slice picture by using the determined preset type model to generate a recognition result.
  • In addition, the present application further provides an electronic device, including a memory and a processor, with a cancer recognition program stored on the memory; when the cancer recognition program is executed by the processor, the following steps can be implemented:
  • Receiving step: receiving a pathological slice picture to be subjected to cancer recognition;
  • Determining step: determining, according to a mapping relationship between the cancer type to be identified and preset type models, the preset type model corresponding to the pathological slice picture;
  • Identifying step: identifying the pathological slice picture by using the determined preset type model to generate a recognition result.
  • In addition, the present application further provides a computer readable storage medium, the computer readable storage medium including a cancer recognition program which, when executed by a processor, can implement any step of the cancer recognition method described above.
  • The cancer identification method, electronic device and computer readable storage medium provided by the present application receive a pathological slice picture to be subjected to cancer recognition and, according to the mapping relationship between the cancer type to be identified and preset type models, input the pathological slice picture into the corresponding preset type model for recognition, quickly determining whether the patient corresponding to the picture has cancer and the canceration stage, thereby increasing the speed of cancer detection and the success rate of cancer treatment.
  • FIG. 1 is a schematic diagram of a preferred embodiment of an electronic device of the present application.
  • FIG. 2 is a block diagram showing a preferred embodiment of the cancer recognition program of FIG. 1;
  • FIG. 3 is a flow chart of a preferred embodiment of the cancer identification method of the present application.
  • FIG. 4 is a flowchart of the preset type model training of the present application.
  • FIG. 1 is a schematic diagram of a preferred embodiment of an electronic device 1 of the present application.
  • In this embodiment, the electronic device 1 may be a server, a smart phone, a tablet computer, a personal computer, a portable computer, or another electronic device having computing functions.
  • The electronic device 1 includes a memory 11, a processor 12, a network interface 13, and a communication bus 14.
  • The network interface 13 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface).
  • The communication bus 14 is used to implement connection and communication between these components.
  • The memory 11 includes at least one type of readable storage medium.
  • The at least one type of readable storage medium may be a non-volatile storage medium such as a flash memory, a hard disk, a multimedia card, or a card type memory.
  • In some embodiments, the memory 11 may be an internal storage unit of the electronic device 1, such as a hard disk of the electronic device 1.
  • In other embodiments, the memory 11 may also be an external storage unit of the electronic device 1, such as a plug-in hard disk equipped on the electronic device 1, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash card, etc.
  • In this embodiment, the memory 11 can be used not only for storing application software installed in the electronic device 1 and various types of data, such as the cancer recognition program 10, pathological slice pictures to be recognized and pathological slice pictures for model training, but also for temporarily storing data that has been output or will be output.
  • The processor 12, in some embodiments, may be a Central Processing Unit (CPU), microprocessor or other data processing chip for running program code or processing data stored in the memory 11, for example executing the computer program code of the cancer recognition program 10 and the training of the preset type models.
  • Preferably, the electronic device 1 may further include a display, which may be referred to as a display screen or a display unit.
  • In some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an Organic Light-Emitting Diode (OLED) touch display, or the like.
  • The display is used to display the information processed in the electronic device 1 and to display a visualized work interface.
  • Preferably, the electronic device 1 may further include a user interface.
  • The user interface may include an input unit such as a keyboard, and an audio output device such as a speaker or headphones.
  • Optionally, the user interface may further include a standard wired interface and a wireless interface.
  • In the device embodiment shown in FIG. 1, the memory 11, as a computer storage medium, stores the program code of the cancer recognition program 10; when the processor 12 executes the program code of the cancer recognition program 10, the following steps are implemented:
  • Receiving step: receiving a pathological slice picture to be subjected to cancer recognition;
  • Determining step: determining, according to a mapping relationship between the cancer type to be identified and preset type models, the preset type model corresponding to the pathological slice picture;
  • Identifying step: identifying the pathological slice picture by using the determined preset type model to generate a recognition result.
  • The pathological slice picture is obtained by taking a sample of the patient's cell tissue of a certain size, staining it with histopathological methods to form a pathological section, and photographing the section under a microscope.
  • Depending on the location and nature of the canceration, the specifications for acquiring the cancerous tissue differ.
  • For example, when we need to detect whether a patient has gastric cancer, the patient's stomach tissue is sliced, dehydrated and stained to obtain a photograph of the patient's gastric pathological section under the microscope.
  • A common staining method is hematoxylin-eosin staining, i.e. H.E staining: hematoxylin stains the chromatin in the nucleus blue, and eosin stains the cytoplasm and nucleoli red.
  • The preset type model corresponding to the pathological slice picture is determined according to the mapping relationship between the cancer type to be identified and preset type models. For example, when we need to detect whether a patient has gastric cancer, it is determined, according to the relationship between gastric cancer and the preset type models, that the preset type model corresponding to the patient's pathological slice picture is the gastric cancer recognition model.
  • The preset type models refer to various trained cancer recognition models, and each cancer recognition model forms a one-to-one mapping relationship with a cancer type. For example, a lung cancer recognition model is used to identify whether lung tissue has cancer, and a liver cancer recognition model is used to identify whether liver tissue has cancer.
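  • For illustration, this one-to-one mapping and the receive/determine/identify flow can be sketched in Python as follows; the registry contents, stub models and function names are assumptions introduced only for clarity and are not part of the original disclosure.

```python
from typing import Any, Callable, Dict

# Illustrative only: a one-to-one registry from cancer type to its trained
# recognition model. The stub below stands in for a trained preset type model.
def _stub_model(name: str) -> Callable[[Any], str]:
    def predict(slice_picture: Any) -> str:
        # A real preset type model returns "cancer" or "non-cancer"
        # for the received pathological slice picture.
        return "non-cancer"
    predict.__doc__ = f"Stub {name} cancer recognition model."
    return predict

MODEL_REGISTRY: Dict[str, Callable[[Any], str]] = {
    "gastric": _stub_model("gastric"),
    "lung": _stub_model("lung"),
    "liver": _stub_model("liver"),
}

def recognize(slice_picture: Any, cancer_type: str) -> str:
    """Receiving, determining and identifying steps in one call."""
    model = MODEL_REGISTRY[cancer_type]   # determining step: cancer type -> preset model
    return model(slice_picture)           # identifying step: generate the recognition result
```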
  • After the preset type model for the pathological slice picture is determined, the pathological slice picture is identified by using the determined preset type model to generate a recognition result.
  • The recognition result is either non-cancer or cancer.
  • When the recognition result is non-cancer, it indicates that the patient corresponding to the pathological slice picture does not have cancer, and the next pathological slice picture is received for recognition.
  • When the recognition result is determined to be cancer, it indicates that the patient corresponding to the pathological slice picture has cancer, and prompt information in a preset format is output.
  • For example, the gastric cancer recognition model is used to identify a pathological slice picture of a patient's stomach tissue; if the recognition result is determined to be cancer, the prompt information is output: "The patient in picture *** has gastric cancer; it is recommended to formulate an effective treatment plan as soon as possible."
  • In another embodiment, the preset type model may further be used to identify the pathological slice picture and, for a pathological slice whose recognition result is determined to be cancer, to further determine the canceration stage of the cancer. If the canceration stage is the first stage, prompt information in a first preset format is output; if the canceration stage is the second stage, prompt information in a second preset format is output; if the canceration stage is the third stage, prompt information in a third preset format is output.
  • For example, when judging the canceration stage of the cancer: when the canceration stage is the first stage, the prompt information is output: "The patient in picture *** has early-stage *** cancer; it is recommended to confirm the condition as soon as possible by further examination and to formulate an effective treatment plan as early as possible"; when the canceration stage is the second stage, the prompt information is output: "The patient in picture *** has mid-stage *** cancer; it is recommended to formulate an effective treatment plan as soon as possible"; when the canceration stage is the third stage, the prompt information is output: "The patient in picture *** has late-stage *** cancer; it is recommended to open a green medical channel and urgently formulate an effective treatment plan."
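  • The stage-dependent prompt step above can be sketched as a simple lookup. The message templates paraphrase the examples in the text, and the function and parameter names (for example picture_id) are hypothetical.

```python
from typing import Optional

# Message templates paraphrasing the examples above; picture_id is hypothetical.
PROMPTS = {
    1: "The patient in picture {pid} has early-stage {cancer}; it is recommended to "
       "confirm the condition by further examination and plan treatment as early as possible.",
    2: "The patient in picture {pid} has mid-stage {cancer}; it is recommended to "
       "formulate an effective treatment plan as soon as possible.",
    3: "The patient in picture {pid} has late-stage {cancer}; it is recommended to "
       "open a green medical channel and urgently formulate an effective treatment plan.",
}

def prompt_for(result: str, stage: int, cancer: str, picture_id: str) -> Optional[str]:
    if result != "cancer":
        return None  # non-cancer: no prompt; continue with the next slice picture
    return PROMPTS[stage].format(pid=picture_id, cancer=cancer)
```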
  • In the present application, the preset type model is a deep convolutional neural network model.
  • The structure of the deep convolutional neural network model is shown in Table 1.
  • The structure of the deep convolutional neural network model embeds two sub-networks (a first feature network and a second feature network) in the main neural network.
  • The network structure of the first feature network is shown in Table 2, and the network structure of the second feature network is shown in Table 3.
  • Features of the pathological slice picture are extracted through the first feature network and the second feature network and spliced together, and the spliced features are then input into the main network to participate in the training.
  • Table 1 Main network structure of the deep convolutional neural network model
  • In the tables, the Layer Name column indicates the name of each layer:
  • Input indicates the input layer;
  • Conv indicates a convolution layer;
  • Conv1 indicates the first convolution layer of the model;
  • MaxPool indicates a maximum pooling layer;
  • MaxPool1 indicates the first maximum pooling layer of the model;
  • Fc indicates a fully connected layer;
  • Fc1 indicates the first fully connected layer in the model;
  • Softmax indicates the Softmax classifier.
  • Batch Size indicates the number of input images of the current layer.
  • Kernel Size indicates the scale of the current layer's convolution kernel (e.g. Kernel Size may equal 3, indicating that the scale of the convolution kernel is 3*3).
  • Stride Size indicates the moving step of the convolution kernel, that is, the distance moved to the next convolution position after one convolution is completed.
  • Pad Size indicates the amount of image padding in the current network layer.
  • Filter Size indicates the number of features output by the layer after the convolution or fully connected operation.
  • The Flatten layer stretches the input multidimensional data into a one-dimensional vector.
  • MeanStdPool indicates a mean-variance pooling layer, which calculates the mean and variance of the input data and then concatenates the two end to end into a one-dimensional vector.
  • The "first feature network and second feature network" layer indicates that, after the main network's MaxPool5, the data output by MaxPool5 enters the two neural networks (the first feature network and the second feature network) for computation, and their Input layers take the output of the main network's MaxPool5 as input; the main network's Concatenate layer joins the outputs of the first and second feature networks end to end into a one-dimensional vector; the last layer of the main network is the output layer, which passes the data through the Fc4 fully connected layer with a softmax activation function and then outputs the result. The operating principle of the preset type model is as follows:
  • First, the cancerous region tiles in each training set are stacked to form a 3*2048*2048 structure and input into the preset type model.
  • The first convolution layer uses a 1*1 convolution kernel to convolve the image and then uses 512 filters for feature projection, outputting a 1*2048*2048*512 image; a 2*2 maximum pooling layer with a stride of 2*2 is then applied to reduce the amount of model computation and control over-fitting.
  • The second convolution layer uses a 3*3 convolution kernel with a stride of 1*1, produces 128 features while maintaining the image size, and outputs a 1*1024*1024*128 image; a 2*2 maximum pooling layer with a stride of 2*2 is then applied, outputting a 1*512*512*128 image.
  • The third to fifth layers are convolution layers of the same structure, using a 3*3 convolution kernel with a stride of 1*1 and producing 256 features.
  • The sixth convolution layer uses a 3*3 convolution kernel with a stride of 1*1, produces 256 features, and applies a 2*2 maximum pooling layer with a stride of 2*2, outputting a 1*256*256*256 image.
  • The seventh to ninth layers are convolution layers of the same structure, using a 3*3 convolution kernel with a stride of 1*1 and producing 512 features.
  • The tenth convolution layer uses a 3*3 convolution kernel with a stride of 1*1, produces 512 features, and applies a 2*2 maximum pooling layer with a stride of 2*2, outputting a 1*128*128*512 image.
  • The eleventh to thirteenth layers are 1*1 convolution layers, using the ReLU activation function to produce 512 features and outputting a 1*128*128*512 image.
  • The fourteenth convolution layer uses a 1*1 convolution kernel with a stride of 1*1, produces 512 features, and applies a 2*2 maximum pooling layer with a stride of 2*2, outputting a 1*64*64*512 image.
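  • The fourteen-layer main-network backbone described above could be sketched in PyTorch roughly as follows. This is an illustrative reconstruction from the text rather than the filed model definition: the ReLU activations between convolution layers and the padding of 1 on the 3*3 convolutions (so the image size is maintained) are assumptions where the text is silent.

```python
import torch
import torch.nn as nn

# Backbone sketch reconstructed from the layer-by-layer description above
# (input 3*2048*2048, output 512*64*64 before the two feature sub-networks).
backbone = nn.Sequential(
    nn.Conv2d(3, 512, kernel_size=1), nn.ReLU(inplace=True),               # layer 1
    nn.MaxPool2d(kernel_size=2, stride=2),                                  # -> 1024x1024
    nn.Conv2d(512, 128, kernel_size=3, padding=1), nn.ReLU(inplace=True),   # layer 2
    nn.MaxPool2d(2, 2),                                                     # -> 512x512
    nn.Conv2d(128, 256, kernel_size=3, padding=1), nn.ReLU(inplace=True),   # layer 3
    nn.Conv2d(256, 256, kernel_size=3, padding=1), nn.ReLU(inplace=True),   # layer 4
    nn.Conv2d(256, 256, kernel_size=3, padding=1), nn.ReLU(inplace=True),   # layer 5
    nn.Conv2d(256, 256, kernel_size=3, padding=1), nn.ReLU(inplace=True),   # layer 6
    nn.MaxPool2d(2, 2),                                                     # -> 256x256
    nn.Conv2d(256, 512, kernel_size=3, padding=1), nn.ReLU(inplace=True),   # layer 7
    nn.Conv2d(512, 512, kernel_size=3, padding=1), nn.ReLU(inplace=True),   # layer 8
    nn.Conv2d(512, 512, kernel_size=3, padding=1), nn.ReLU(inplace=True),   # layer 9
    nn.Conv2d(512, 512, kernel_size=3, padding=1), nn.ReLU(inplace=True),   # layer 10
    nn.MaxPool2d(2, 2),                                                     # -> 128x128
    nn.Conv2d(512, 512, kernel_size=1), nn.ReLU(inplace=True),              # layer 11
    nn.Conv2d(512, 512, kernel_size=1), nn.ReLU(inplace=True),              # layer 12
    nn.Conv2d(512, 512, kernel_size=1), nn.ReLU(inplace=True),              # layer 13
    nn.Conv2d(512, 512, kernel_size=1), nn.ReLU(inplace=True),              # layer 14
    nn.MaxPool2d(2, 2),                                                     # -> 64x64 (MaxPool5)
)

# Shape check: backbone(torch.randn(1, 3, 2048, 2048)).shape == (1, 512, 64, 64)
```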
  • Then, the 1*64*64*512 image output above is sent into two different network structures, namely the network structure of the first feature network and the network structure of the second feature network.
  • The network structure of the first feature network is:
  • The first layer is a Flatten layer, which stretches the 1*64*64*512 image into a one-dimensional vector.
  • The second layer is a fully connected layer, using the ReLU activation function to convert the input features into a 1*4096 output.
  • The third layer is a fully connected layer, using the ReLU activation function to convert the input features into a 1*4096 output.
  • The fourth layer is a fully connected layer, using the ReLU activation function to convert the input features into a 1*512 output.
  • The network structure of the second feature network is:
  • The first layer is a mean-variance pooling layer, which extracts the mean and variance of each feature channel's image and splices the means and variances of all feature channels together to obtain 1*1024 features.
  • The second layer is a fully connected layer, using the ReLU activation function to convert the input features into a 1*4096 output.
  • The third layer is a fully connected layer, using the ReLU activation function to convert the input features into a 1*4096 output.
  • The fourth layer is a fully connected layer, using the ReLU activation function to convert the input features into a 1*512 output.
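  • The two sub-networks might be sketched as follows, continuing the PyTorch reconstruction above; the per-channel mean and variance computation in the second feature network follows the mean-variance pooling described in the text, while the class names are invented for illustration.

```python
import torch
import torch.nn as nn

class FirstFeatureNet(nn.Module):
    """Flatten the 512*64*64 feature map, then three fully connected layers."""
    def __init__(self) -> None:
        super().__init__()
        self.fc = nn.Sequential(
            nn.Flatten(),                                         # 512*64*64 -> 2,097,152 features
            nn.Linear(512 * 64 * 64, 4096), nn.ReLU(inplace=True),
            nn.Linear(4096, 4096), nn.ReLU(inplace=True),
            nn.Linear(4096, 512), nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fc(x)                                          # (N, 512)

class SecondFeatureNet(nn.Module):
    """Mean-variance pooling per feature channel, then three fully connected layers."""
    def __init__(self) -> None:
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(2 * 512, 4096), nn.ReLU(inplace=True),       # 512 means + 512 variances
            nn.Linear(4096, 4096), nn.ReLU(inplace=True),
            nn.Linear(4096, 512), nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        mean = x.mean(dim=(2, 3))                                  # (N, 512) per-channel mean
        var = x.var(dim=(2, 3))                                    # (N, 512) per-channel variance
        return self.fc(torch.cat([mean, var], dim=1))              # (N, 512)
```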
  • Finally, the two 1*512 vectors output by the first feature network and the second feature network are joined end to end into a 1*1024 vector, which is input into the following network:
  • The first layer is a fully connected layer, using the ReLU activation function to convert the input features into a 1*256 output.
  • The second layer is a fully connected layer, using the ReLU activation function to convert the input features into a 1*256 output.
  • The third layer is a fully connected layer, using the ReLU activation function to convert the input features into a 1*64 output.
  • The fourth layer is a fully connected output layer, using the softmax activation function to output a 1*1 result.
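  • A sketch of the assembled model follows, reusing the backbone and the two sub-network sketches above. The text describes a 1*1 softmax output; it is modeled here as a two-class softmax over cancer/non-cancer, which is an assumption.

```python
import torch
import torch.nn as nn

class CancerRecognitionNet(nn.Module):
    """Assembled sketch: backbone -> two feature sub-networks -> fused classification head."""
    def __init__(self, backbone: nn.Module) -> None:
        super().__init__()
        self.backbone = backbone
        self.first = FirstFeatureNet()
        self.second = SecondFeatureNet()
        self.head = nn.Sequential(
            nn.Linear(1024, 256), nn.ReLU(inplace=True),
            nn.Linear(256, 256), nn.ReLU(inplace=True),
            nn.Linear(256, 64), nn.ReLU(inplace=True),
            # The text states a 1*1 softmax output; a two-class head (cancer / non-cancer)
            # is used here as an assumption.
            nn.Linear(64, 2),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feat = self.backbone(x)                                              # (N, 512, 64, 64)
        fused = torch.cat([self.first(feat), self.second(feat)], dim=1)      # (N, 1024)
        return torch.softmax(self.head(fused), dim=1)                        # (N, 2) probabilities

model = CancerRecognitionNet(backbone)
```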
  • The cancer recognition method proposed in the above embodiment calls different cancer recognition models for different cancers to recognize the pathological slice picture to be subjected to cancer recognition, determines whether the patient corresponding to the pathological slice picture has cancer, and increases the detection speed, thereby improving the chance of cure.
  • FIG. 2 is a block diagram of a preferred embodiment of the cancer recognition program of FIG. 1.
  • A module as referred to in this application refers to a series of computer program instruction segments capable of performing a particular function.
  • In this embodiment, the cancer recognition program 10 includes: a receiving module 110, a determining module 120, an identifying module 130, and a prompting module 140.
  • The functions or operating steps implemented by the modules 110-140 are similar to those described above and are not detailed here again. Illustratively:
  • The receiving module 110 is configured to receive a pathological slice picture to be subjected to cancer recognition;
  • The determining module 120 is configured to determine, according to the mapping relationship between the cancer type to be identified and preset type models, the preset type model corresponding to the pathological slice picture;
  • The identifying module 130 is configured to identify the pathological slice picture by using the determined preset type model and generate a recognition result;
  • The prompting module 140 is configured to output prompt information in a preset format to indicate the recognition result generated by the model.
  • FIG. 3 is a flow chart of a preferred embodiment of the cancer identification method of the present application.
  • In this embodiment, when the processor 12 executes the computer program of the cancer recognition program 10 stored in the memory 11, the cancer recognition method is implemented through steps S10 to S30:
  • Step S10: the cancer recognition program 10 receives a picture of the patient's pathological section.
  • The pathological slice picture is obtained by taking a slice of a certain size of the tissue to be identified from the patient, staining it with histopathological methods to form a pathological section, and photographing the section under a microscope.
  • Depending on the location and nature of the canceration, the specifications for acquiring the cancerous tissue differ. For example, when we need to detect whether a patient has gastric cancer, a part of the stomach tissue is taken with a fiber gastroscope, and the patient's stomach tissue is then sliced, dehydrated and stained, thereby obtaining a photograph of the patient's gastric pathological section under the microscope.
  • A common staining method is H.E staining, in which hematoxylin stains the chromatin in the nucleus blue and eosin stains the cytoplasm and nucleoli red.
  • Step S20: the cancer recognition program 10 determines, according to the mapping relationship between the cancer type to be identified and preset type models, the preset type model corresponding to the pathological slice picture. For example, when we need to detect whether a patient has gastric cancer, it is determined, according to the relationship between gastric cancer and the preset type models, that the preset type model corresponding to the patient's pathological slice picture is the gastric cancer recognition model.
  • The preset type models refer to various trained cancer recognition models, and each cancer recognition model forms a one-to-one mapping relationship with a cancer type. For example, a rectal cancer recognition model is used to identify whether intestinal tissue has cancer, and a liver cancer recognition model is used to identify whether liver tissue has cancer.
  • Step S30: after the preset type model for the pathological slice picture is determined, the cancer recognition program 10 identifies the pathological slice picture by using the determined preset type model to generate a recognition result.
  • The recognition result is either non-cancer or cancer.
  • When the recognition result is non-cancer, it indicates that the patient corresponding to the pathological slice picture does not have cancer, and the next pathological slice picture is received for recognition.
  • When the recognition result is cancer, it indicates that the patient corresponding to the pathological slice picture has cancer, and prompt information in a preset format is output.
  • For example, the gastric cancer recognition model is used to identify a pathological slice picture of a patient's stomach tissue; if the recognition result is determined to be cancer, the prompt information is output: "The patient in picture *** has gastric cancer; it is recommended to formulate an effective treatment plan as soon as possible."
  • In another embodiment, the preset type model may further be used to identify the pathological slice picture and, for a pathological slice whose recognition result is determined to be cancer, to further determine the canceration stage of the cancer. If the canceration stage is the first stage, prompt information in a first preset format is output; if the canceration stage is the second stage, prompt information in a second preset format is output; if the canceration stage is the third stage, prompt information in a third preset format is output.
  • For example, when judging the canceration stage of the cancer: when the canceration stage is the first stage, the prompt information is output: "The patient in picture *** has early-stage *** cancer; it is recommended to confirm the condition as soon as possible by further examination and to formulate an effective treatment plan as early as possible"; when the canceration stage is the second stage, the prompt information is output: "The patient in picture *** has mid-stage *** cancer; it is recommended to formulate an effective treatment plan as soon as possible"; when the canceration stage is the third stage, the prompt information is output: "The patient in picture *** has late-stage *** cancer; it is recommended to open a green medical channel and urgently formulate an effective treatment plan."
  • The preset type model is pre-built and trained. FIG. 4 is a flowchart of the preset type model training of the present application; the training steps of the preset type model are as follows:
  • A1: obtain a first preset number of pathological slice sample pictures of a certain cancer in a preset format. For example, for the training of the gastric cancer recognition model, 100 tiff-format pathological slice pictures are obtained from each of 1,000 patients (800 patients with gastric cancer and 200 patients without gastric cancer), giving 100,000 pathological slice sample pictures.
  • A2: a coordinate axis is established on each pathological slice sample picture and cancerous marker points are annotated, each cancerous marker point being labeled with its corresponding horizontal and vertical coordinates.
  • For example, the horizontal and vertical coordinates of cancerous marker point 1 are (53, 123).
  • If the cells in a pathological slice sample picture are normal, a non-cancer marker is applied.
  • The cancerous marker point refers to a boundary point between a cancerous cell region and a normal cell region. Since the morphological structure of cancerous cells differs from that of normal cells (for example, the nuclei of cancerous cells are enlarged and there may be more than one nucleus), cancerous cells are easy to distinguish from normal cells after staining.
  • The annotated cancerous marker points form a cancerous shape curve, and the region enclosed by the cancerous shape curve is the region in which the cells have become cancerous.
  • The corresponding cancer and canceration stage are then labeled in the region of cell canceration.
  • For example, 100,000 gastric pathological slice pictures are labeled with non-cancer markers or cancerous marker points, and the cancerous regions of the pictures are labeled "gastric cancer" together with the gastric cancer stage.
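  • For illustration, an annotation record for one sample picture might be stored roughly as below; the field names are hypothetical and only mirror the labeling scheme described above.

```python
# Hypothetical annotation record for one pathological slice sample picture.
annotation = {
    "picture": "sample_0001.tiff",
    "label": "cancer",                       # or "non-cancer" for normal-cell pictures
    "cancer_type": "gastric cancer",
    "stage": 2,                              # canceration stage of the labeled region
    "marker_points": [(53, 123), (60, 131), (72, 140)],  # boundary points forming the curve
}
```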
  • A3: one or more cancerous region tiles corresponding to each pathological slice sample picture are respectively identified, according to the cancerous shape curves on each picture, following a preset cancerous region determination rule. Since cancer cells have low adhesion and are prone to metastasis, one pathological slice sample picture may contain multiple cancerous region tiles.
  • The cancerous region determination rule comprises:
  • the cancerous shape curves on a pathological slice sample picture are selected one by one; for example, if the pathological slice sample picture has a plurality of cancerous shape curves, each cancerous shape curve is selected in turn;
  • after a cancerous shape curve is selected, the maximum abscissa, minimum abscissa, maximum ordinate and minimum ordinate of all cancerous marker points on the curve are determined; the determined maximum abscissa is taken as the abscissa of the first side of a rectangular frame, the determined minimum abscissa as the abscissa of the second side of the rectangular frame, the determined maximum ordinate as the ordinate of the third side of the rectangular frame, and the determined minimum ordinate as the ordinate of the fourth side of the rectangular frame; the position of the rectangular frame is determined by the four vertices at which the first side, the second side, the third side and the fourth side intersect, and the picture area enclosed by the rectangular frame is the cancerous region tile.
  • For example, if among all cancerous marker points on a cancerous shape curve the maximum abscissa is x1, the minimum abscissa is x2, the maximum ordinate is y1 and the minimum ordinate is y2, the four generated sides are X = x1, X = x2, Y = y1 and Y = y2; the four vertices at which these sides intersect, (x1, y1), (x1, y2), (x2, y1) and (x2, y2), are the vertices of the rectangular frame, and this rectangle is the cancerous region tile.
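  • The rectangular-frame rule amounts to taking the axis-aligned bounding box of the marker points on one curve; a minimal sketch:

```python
from typing import List, Tuple

def cancerous_region_tile(points: List[Tuple[int, int]]) -> Tuple[int, int, int, int]:
    """Axis-aligned rectangular frame (x_min, y_min, x_max, y_max) enclosing
    all cancerous marker points of one cancerous shape curve."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return min(xs), min(ys), max(xs), max(ys)

print(cancerous_region_tile([(53, 123), (60, 131), (72, 140)]))  # -> (53, 123, 72, 140)
```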
  • A4: the cancerous region tiles corresponding to all the pathological slice sample pictures are randomly divided into a training set of a first preset proportion and a verification set of a second preset proportion.
  • For example, the cancerous region tiles corresponding to all the pathological slice sample pictures are divided into a training set and a verification set at a ratio of 8:2; the training set accounts for 80% of all cancerous region tiles, and the remaining 20% of the cancerous region tiles serve as the verification set used to evaluate the quality of the model.
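  • The random 8:2 split could be sketched as follows; the ratio follows the example above, while the function name and seed are illustrative.

```python
import random
from typing import List, Sequence, Tuple

def split_tiles(tiles: Sequence, train_ratio: float = 0.8, seed: int = 0) -> Tuple[List, List]:
    """Randomly split cancerous region tiles into a training set and a verification set."""
    items = list(tiles)
    random.Random(seed).shuffle(items)
    cut = int(len(items) * train_ratio)
    return items[:cut], items[cut:]

train_set, verify_set = split_tiles(list(range(100)))   # 80 training tiles, 20 verification tiles
```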
  • A5: the cancerous region tiles in the training set are input into the model for training to generate the preset type model, and the cancerous region tiles in the verification set are used to verify the generated preset type model.
  • Specifically, each cancerous region tile in the training set is composed into a 3*2048*2048 structure, and the model parameters are updated with one image per iteration.
  • The tiles composed into the 3*2048*2048 structure are input into the main neural network, and a 1*64*64*512 image is output after the main network's MaxPool5.
  • The output 1*64*64*512 image is then input into the first feature network and the second feature network, respectively.
  • The first feature network flattens the image into a one-dimensional vector and then uses the ReLU activation function to generate a 1*512 vector.
  • The second feature network extracts the mean and variance of each feature channel's image and splices them together, then uses the ReLU activation function to generate another 1*512 vector.
  • Finally, the two 1*512 vectors are spliced into a 1*1024 vector and, together with the annotation result, input into the main network; the model parameters are obtained by computing with the ReLU activation function and the softmax activation function.
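  • A single-image training iteration over the model sketched earlier might look like the following; the optimizer, learning rate and negative log-likelihood loss are assumptions, since the text only specifies that the parameters are updated with one image per iteration.

```python
import torch
import torch.nn.functional as F

# One-image-per-iteration parameter update for the model assembled above.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)   # optimizer and lr are assumptions

def train_one_tile(tile: torch.Tensor, label: torch.Tensor) -> float:
    """tile: (1, 3, 2048, 2048) stacked cancerous region tile; label: (1,) class index."""
    model.train()
    optimizer.zero_grad()
    probs = model(tile)                                     # (1, 2) softmax probabilities
    loss = F.nll_loss(torch.log(probs + 1e-8), label)       # compare against the annotation
    loss.backward()
    optimizer.step()
    return loss.item()
```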
  • A6: if the verification pass rate is greater than or equal to a preset threshold, the training is completed; if the verification pass rate is less than the preset threshold, a second preset number of sample pictures is added and the process returns to step A3. For example, after the gastric cancer recognition model is generated, the cancerous region tiles in the verification set are input into the gastric cancer recognition model for detection; if the pass rate is greater than or equal to 98%, the training is completed. If the pass rate is less than 98%, 20,000 more pathological slice sample pictures are added, the flow returns to step A3, and the model parameters are adjusted until the optimal gastric cancer recognition model is trained.
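  • The pass-rate check of step A6 can be sketched as an accuracy threshold over the verification set, reusing the model from the sketches above; the 98% threshold follows the example, and the data-feeding details are assumptions.

```python
import torch

@torch.no_grad()
def verification_pass_rate(pairs) -> float:
    """pairs: iterable of (tile, label) drawn from the verification set."""
    model.eval()
    correct = total = 0
    for tile, label in pairs:
        pred = model(tile).argmax(dim=1)          # predicted class for each tile
        correct += int((pred == label).sum())
        total += int(label.numel())
    return correct / max(total, 1)

# Step A6: training is complete once the pass rate reaches the preset threshold,
# e.g. verification_pass_rate(verify_pairs) >= 0.98; otherwise add more sample
# pictures and return to step A3.
```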
  • The cancer recognition method proposed in this embodiment recognizes the pathological slice picture to be subjected to cancer recognition by calling the trained preset type model, quickly detects whether the patient corresponding to the pathological slice picture has cancer, reduces detection time, and increases the patient's chance of successfully curing the cancer.
  • In addition, an embodiment of the present application further provides a computer readable storage medium, where the computer readable storage medium includes a cancer recognition program 10, and when the cancer recognition program 10 is executed by a processor, the following operations are implemented:
  • Receiving step: receiving a pathological slice picture to be subjected to cancer recognition;
  • Determining step: determining, according to a mapping relationship between the cancer type to be identified and preset type models, the preset type model corresponding to the pathological slice picture;
  • Identifying step: identifying the pathological slice picture by using the determined preset type model to generate a recognition result.
  • Preferably, the preset type model is obtained through the following training steps:
  • A1: obtaining a first preset number of pathological slice sample pictures of a certain cancer in a preset format;
  • A2: marking cancerous marker points on each pathological slice sample picture, the cancerous marker points forming a cancerous shape curve, and labeling the cancer and canceration stage corresponding to the pathological slice sample picture;
  • A3: identifying, according to the cancerous shape curves on each pathological slice sample picture and a preset cancerous region determination rule, the cancerous region tiles corresponding to each pathological slice sample picture;
  • A4: dividing the cancerous region tiles corresponding to all the pathological slice sample pictures into a training set of a first preset proportion and a verification set of a second preset proportion;
  • A5: performing model training by using the cancerous region tiles in the training set to generate the preset type model, and verifying the generated preset type model by using the cancerous region tiles in the verification set;
  • A6: if the verification pass rate is greater than or equal to a preset threshold, the training is completed; if the verification pass rate is less than the preset threshold, a second preset number of sample pictures is added and the process returns to step A3.
  • Preferably, the preset cancerous region determination rule comprises:
  • for a pathological slice sample picture, selecting the cancerous shape curves on the pathological slice sample picture one by one;
  • after a cancerous shape curve is selected, determining the maximum abscissa, minimum abscissa, maximum ordinate and minimum ordinate of all cancerous marker points on the curve, taking the determined maximum abscissa as the abscissa of the first side of a rectangular frame, the determined minimum abscissa as the abscissa of the second side of the rectangular frame, the determined maximum ordinate as the ordinate of the third side of the rectangular frame, and the determined minimum ordinate as the ordinate of the fourth side of the rectangular frame; the position of the rectangular frame is determined by the four vertices at which the first, second, third and fourth sides intersect, and the picture area enclosed by the rectangular frame is the cancerous region tile.
  • Preferably, the preset type model is a convolutional neural network model whose main network structure embeds the sub-network structures of a first feature network and a second feature network; features of the pathological slice sample picture are extracted through the first feature network and the second feature network and spliced together, and the spliced features are then input into the main network structure to participate in the training.
  • Preferably, the method further comprises:
  • if the generated recognition result is determined to be cancer, judging the canceration stage of the cancer and outputting prompt information in a preset format corresponding to that canceration stage.
  • The technical solution of the present application, in essence or in the part that contributes to the prior art, may be embodied in the form of a software product stored in a storage medium as described above (such as a ROM/RAM, a magnetic disk or an optical disc), including a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, etc.) to perform the methods described in the various embodiments of the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Ultra Sonic Diagnosis Equipment (AREA)

Abstract

The present application discloses a cancer recognition method, device and storage medium. The method comprises: receiving a pathological slice picture to be subjected to cancer recognition; determining, according to a mapping relationship between the cancer type to be identified and preset type models, the preset type model corresponding to the pathological slice picture; and identifying the pathological slice picture by using the determined preset type model to generate a recognition result. By recognizing the cancerous region tiles in the pathological slice picture, the present application determines whether the patient corresponding to the picture has cancer, improving the efficiency of cancer detection.

Description

癌症识别方法、装置及存储介质
优先权申明
本申请要求于2018年01月12日提交中国专利局、申请号为201810030195.2,名称为“癌症识别方法、装置及存储介质”的中国专利申请的优先权,该中国专利申请的整体内容以参考的方式结合本申请中。
技术领域
本申请涉及图片识别技术领域,尤其涉及一种癌症识别方法、装置及计算机可读存储介质。
背景技术
癌症是当今医学上难以治愈的几种疾病之一。根据数据统计,中国每年新发病例约为220万,因癌症死亡的人数约为160万。癌症的临床表现因其所在的部位和癌变阶段的不同而不同,癌症早期多无明显症状,当癌症患者出现特异性症状时,癌症往往已经属于晚期了。因此,如何准确且迅速的发现身体部位癌变,已经成为医疗界最重要的课题之一。
目前,常用的癌症识别方法是通过人工对病理切片进行检测。一般而言,患者只有怀疑患有癌症的情况下,才会花费金钱和时间做病理切片的人工病理检测,而且,人工病理检测通常需要花费数天时间,这在一定程度上大幅提高了癌症的不可治愈性,严重危及了患者的生命。
发明内容
鉴于以上内容,本申请提供一种癌症识别方法、装置及计算机可读存储介质,其主要目的在于利用大数据与人工智能检测技术对病理切片图片进行快速检测,提高癌症识别效率。
为实现上述目的,本申请提供一种癌症识别方法,该方法包括:
接收步骤:接收待癌症识别的病理切片图片;
确定步骤:根据待识别的癌症类型与预设类型模型的映射关系,确定该病理切片图片对应的预设类型模型;
识别步骤:利用确定的预设类型模型对该病理切片图片进行识别,生成识别结果。
此外,本申请还提供一种电子装置,该电子装置包括:存储器、处理器,所述存储器上存储癌症识别程序,所述癌症识别程序被所述处理器执行,可实现如下步骤:
接收步骤:接收待癌症识别的病理切片图片;
确定步骤:根据待识别的癌症类型与预设类型模型的映射关系,确定该病理切片图片对应的预设类型模型;
识别步骤:利用确定的预设类型模型对该病理切片图片进行识别,生成识别结果。
此外,为实现上述目的,本申请还提供一种计算机可读存储介质,所述计算机可读存储介质中包括癌症识别程序,所述癌症识别程序被处理器执行时,可实现如上所述癌症识别方法中的任意步骤。
本申请提出的癌症识别方法、电子装置及计算机可读存储介质,通过接收待癌症识别的病理切片图片,根据待识别的癌症类型与预设类型模型的映射关系,将该病理切片图片输入对应的预设类型模型进行识别,快速判断出该图片对应的患者是否有癌症及癌症癌变阶段,提高癌症检测的速率,增大癌症治疗成功率。
附图说明
图1为本申请电子装置较佳实施例的示意图;
图2为图1中癌症识别程序较佳实施例的模块示意图;
图3为本申请癌症识别方法较佳实施例的流程图;
图4为本申请预设类型模型训练的流程图。
本申请目的的实现、功能特点及优点将结合实施例,参照附图做进一步说明。
具体实施方式
应当理解,此处所描述的具体实施例仅仅用以解释本申请,并不用于限定本申请。
如图1所示,是本申请电子装置1较佳实施例的示意图。
在本实施例中,电子装置1可以是服务器、智能手机、平板电脑、个人电脑、便携计算机以及其他具有运算功能的电子设备。
该电子装置1包括:存储器11、处理器12、网络接口13及通信总线14。其中,网络接口13可选地可以包括标准的有线接口、无线接口(如WI-FI接口)。通信总线14用于实现这些组件之间的连接通信。
存储器11至少包括一种类型的可读存储介质。所述至少一种类型的可读存储介质可为如闪存、硬盘、多媒体卡、卡型存储器等的非易失性存储介质。在一些实施例中,所述存储器11可以是所述电子装置1的内部存储单元,例如该电子装置1的硬盘。在另一些实施例中,所述存储器11也可以是所述电子装置1的外部存储单元,例如所述电子装置1上配备的插接式硬盘,智能存储卡(Smart Media Card,SMC),安全数字(Secure Digital,SD)卡,闪存卡(Flash Card)等。
在本实施例中,所述存储器11不仅可以用于存储安装于所述电子装置1的应用软件及各类数据,例如癌症识别程序10、待识别的病理切片图片和模型训练的病理切片图片,还可以用于暂时地存储已经输出或者将要输出的数据。
处理器12在一些实施例中可以是一中央处理器(Central Processing Unit,CPU),微处理器或其它数据处理芯片,用于运行存储器11中存储的程序代码或处理数据,例如执行癌症识别程序10的计算机程序代码和预设类型模型的训练等。
优选地,该电子装置1还可以包括显示器,显示器可以称为显示屏或显示单元。在一些实施例中显示器可以是LED显示器、液晶显示器、触控式液晶显示器以及有机发光二极管(Organic Light-Emitting Diode,OLED)触摸器等。显示器用于显示在电子装置1中处理的信息以及用于显示可视化的工作界面。
优选地,该电子装置1还可以包括用户接口,用户接口可以包括输入单元比如键盘(Keyboard)、语音输出装置比如音响、耳机等,可选地用户接口还可以包括标准的有线接口、无线接口。
在图1所示的装置实施例中,作为一种计算机存储介质的存储器11中存储癌症识别程序10的程序代码,处理器12执行癌症识别程序10的程序代码时,实现如下步骤:
接收步骤:接收待癌症识别的病理切片图片;
确定步骤:根据待识别的癌症类型与预设类型模型的映射关系,确定该病理切片图片对应的预设类型模型;
识别步骤:利用确定的预设类型模型对该病理切片图片进行识别,生成识别结果。
本实施例中,当需要检测某位患者是否患有癌症时,接收该患者做病理切片的图片。其中,所述病理切片图片是通过取患者一定大小细胞组织,用病理组织学方法染色制成病理切片,并在显微镜下拍摄得到的。根据癌变部位、性质的不同,癌变组织的获取规范不同。例如,当我们需要检测某患者是否患有胃癌时,对患者胃部组织进行切片、脱水、染色,从而得到显微镜下该患者的胃部病理切片的照片。其中常见的染色法是苏木素-伊红染色法,即H.E染色法,苏木素将细胞核中的染色质染成蓝色,伊红将细胞的胞质和核仁染成红色。
根据待识别的癌症类型与预设类型模型的映射关系,确定该病理切片图片对应的预设类型模型。例如,当我们需要检测患者是否患有胃癌时,根据胃癌与预设类型模型的关系,确定该患者的病理切片图片对应的预设类型模型是胃癌识别模型。其中,所述预设类型模型指各种训练好的癌症识别模型,每个癌症识别模型与每种癌症形成一一对应的映射关系。例如,肺癌识别模型用于识别肺部组织是否患有癌症;肝癌识别模型用于识别肝部组织是否患有癌症。
在确定该病理切片图片的预设类型模型之后,利用确定的预设类型模型对该病理切片图片进行识别,生成识别结果。其中,所述识别结果包括非癌症和癌症。当识别结果为非癌症时,说明该病理切片图片对应的患者没有患癌症,继续接收下一张病理切片图片进行识别。当识别结果确定为癌症时,说明该病理切片图片对应的患者患有癌症,输出预设格式的提示信息。例如,利用胃癌识别模型对患者胃部组织的病理切片图片进行识别,识别结果确定为癌症,输出提示信息:“图片***的患者有胃癌,建议尽快制定有效的治疗 方案”。
在另一个实施例中,还可以利用预设类型模型对病理切片图片进行识别,对识别结果确定为癌症的病理切片进一步判断,确定癌症的癌变阶段。若癌变阶段为第一阶段,则输出第一预设格式的提示信息;若癌变阶段为第二阶段,则输出第二预设格式的提示信息;若癌变阶段为第三阶段,则输出第三预设格式的提示信息。例如,判断癌症的癌变阶段,当癌变阶段为第一阶段时,输出提示信息:“图片***的患者为***癌早期,建议通过检验手段尽快确认病情,并尽早制定有效的治疗方案”;当癌变阶段为第二阶段时,输出提示信息:“图片***的患者为***癌中期,建议尽快制定有效的治疗方案”;当癌变阶段为第三阶段时,输出提示信息:“图片***的患者为***癌晚期,建议开通绿色医疗通道,并紧急制定有效的治疗方案”。
在本申请中,所述预设类型模型为深度卷积神经网络模型,该深度卷积神经网络模型的结构如表1所示。该深度卷积神经网络模型的结构是在主神经网络中嵌入两个子网络(第一特征网络和第二特征网络),第一特征网络的网络结构如表2所示,第二特征网络的网络结构如表3所示,所述病理切片图片分别经过第一特征网络、第二特征网络提取特征并进行特征拼接后,再输入到主网络中参与训练。
表1:该深度卷积神经网络模型主网络结构
Figure PCTCN2018089132-appb-000001
Figure PCTCN2018089132-appb-000002
表2:第一特征网络的网络结构
Layer Name Batch Size Kernel Size Stride Size Pad Size Filter Size
Input 16 N/A N/A N/A N/A
Flatten 16 N/A N/A N/A N/A
Fc1 16 N/A N/A N/A 4096
Fc2 16 N/A N/A N/A 4096
Fc3 16 N/A N/A N/A 512
表3:第二特征网络的网络结构
Layer Name Batch Size Kernel Size Stride Size Pad Size Filter Size
Input 16 N/A N/A N/A N/A
MeanStdPool 16 N/A N/A N/A N/A
Fc1 16 N/A N/A N/A 4096
Fc2 16 N/A N/A N/A 4096
Fc3 16 N/A N/A N/A 512
其中,Layer Name列表示每一层的名称,Input表示输入层,Conv表示卷积层,Conv1表示模型的第1个卷积层,MaxPool表示最大值池化层,MaxPool1表示模型的第1个最大值池化层,Fc表示全连接层,Fc1表示模型中第1个全连接层,Softmax表示Softmax分类器;Batch Size表示当前层的输入图像数目;Kernel Size表示当前层卷积核的尺度(例如,Kernel Size可以等于3,表示卷积核的尺度为3*3);Stride Size表示卷积核的移动步长,即做完一次卷积之后移动到下一个卷积位置的距离;Pad Size表示对当前网络层之中的图像填充的大小;filter Size表示该层在卷积或者全连操作后,输出的特征数量;Flatten层表示把输入的多维数据拉伸成一维向量;MeanStdPool表示这是一个均值方差池化层,即把输入数据的均值和方差计算出来,然后两者首尾相接成一维向量;“第一特征网络和第二特征网络”层表示在主网络MaxPool5之后,经MaxPool5输出的数据将分别进入两个神经网络(第一特征网络和第二特征网络)进行 计算,他们的Input输入层输入为主网络MaxPool5的输出;主网络Concatnate层是指把第一、二特征网络的输出首尾相接拼成一维向量;主网络最后一层是输出层,表示先通过Fc4全连层,采用softmax激活函数,然后输出。所述预设类型模型的运行原理如下:
首先,将每个训练集中的癌变区域图块堆叠,组成3*2048*2048结构,输入预设类型模型。
第一层卷积层,采用1*1的卷积核对图像进行卷积,接着用512个过滤器进行特征投影,输出1*2048*2048*512图像,然后采用2*2最大值池化层,步长为2*2,用于减少模型计算量,控制过拟合。
第二层卷积层,采用3*3的卷积核,步长为1*1,产生128个特征,并保持图像大小,输出1*1024*1024*128图像;然后采用2*2最大值池化层,步长为2*2,输出1*512*512*128图像。
第三层到第五层为相同结构的卷积层,采用3*3卷积核,步长为1*1,产生256个特征。
第六层卷积层,采用3*3的卷积核,步长为1*1,产生256个特征,并且采用2*2最大值池化层,步长为2*2,输出1*256*256*256图像。
第七层到第九层为相同结构的卷积层,采用3*3卷积核,步长为1*1,产生512个特征。
第十层卷积层,采用3*3的卷积核,步长为1*1,产生512个特征,并且采用2*2最大值池化层,步长为2*2,输出1*128*128*512图像。
第十一层到十三层是1*1卷积层,采用ReLU激活函数,产生512特征,输出1*128*128*512图像。
第十四层卷积层,采用1*1的卷积核,步长为1*1,产生512个特征,并且采用2*2最大值池化层,步长为2*2,输出1*64*64*512图像。
然后,将以上输出的1*64*64*512图像送进两个不同的网络结构,该两个不同的网络结构分别为第一特征网络的网络结构和第二特征网络的网络结构。
第一特征网络的网络结构:
第一层Flatten层,把1*64*64*512图像拉伸成一维向量;
第二层全连层,采用ReLU激活函数,把输入特征转化为1*4096输出。
第三层全连层,采用ReLU激活函数,把输入特征转化为1*4096输出。
第四层全连层,采用ReLU激活函数,把输入特征转化为1*512输出。
第二特征网络的网络结构是:
第一层均值方差池化层,对每个特征通道的图像提取均值和方差,把各特征通道的均值方差拼接起来,得到1*1024特征。
第二层全连层,采用ReLU激活函数,把输入特征转化为1*4096输出。
第三层全连层,采用ReLU激活函数,把输入特征转化为1*4096输出。
第四层全连层,采用ReLU激活函数,把输入特征转化为1*512输出。
最后,把第一网络结构和第二网络结构输出的两个1*512向量首尾相接拼成1*1024向量,输入以下网络:
第一层全连层,采用ReLU激活函数,把输入特征转化为1*256输出。
第二层全连层,采用ReLU激活函数,把输入特征转化为1*256输出。
第三层全连层,采用ReLU激活函数,把输入特征转化为1*64输出。
第四层全连输出层,采用softmax激活函数,输出1*1结果。
上述实施例提出的癌症识别方法,根据不同的癌症调用不同的癌症识别模型对待癌症识别的病理切片图片进行识别,判断该病理切片图片对应的患者是否患有癌症,增大检测速率,从而能够提高治愈机率。
如图2所示,是图1中癌症识别程序较佳实施例的模块示意图。本申请所称的模块是指能够完成特定功能的一系列计算机程序指令段。
在本实施例中,癌症识别程序10包括:接收模块110、确定模块120、识别模块130、提示模块140,所述模块110-140所实现的功能或操作步骤均与上文类似,此处不再详述,示例性地,例如其中:
接收模块110,用于接收待癌症识别的病理切片图片;
确定模块120,用于根据待识别的癌症类型与预设类型模型的映射关系,确定该病理切片图片对应的预设类型模型;
识别模块130,用于利用确定的预设类型模型对该病理切片图片进行识别,生成识别结果;
提示模块140,用于输出预设格式的提示信息提示模型生成的识别结果。
如图3所示,是本申请癌症识别方法较佳实施例的流程图。
在本实施例中,处理器12执行存储器11中存储的癌症识别程序10的计算机程序时实现癌症识别方法包括:步骤S10-步骤S30:
步骤S10,癌症识别程序10接收该患者做病理切片的图片。其中,所述病理切片图片是通过取患者一定大小的待识别组织切片,用病理组织学方法染色制成病理切片,并在显微镜下拍摄得到的。根据癌变部位、性质的不同,癌变组织的获取规范不同。例如,当我们需要检测某患者是否患有胃癌时,利用纤维胃镜夹取一部分胃组织,然后对患者胃部组织进行切片、脱水、染色,从而得到显微镜下该患者的胃部病理切片的照片。常见的染色指利用H.E染色法通过苏木素将细胞核中的染色质染成蓝色,伊红将细胞的胞质和核仁染成红色。
步骤S20,癌症识别程序10根据待识别的癌症类型与预设类型模型的映射关系,确定该病理切片图片对应的预设类型模型。例如,当我们需要检测患者是否患有胃癌时,根据胃癌与预设类型模型的关系,确定该患者的病理切片图片对应的预设类型模型是胃癌识别模型。其中,所述预设类型模型指各种训练好的癌症识别模型,每个癌症识别模型与每种癌症形成一一对应的映射关系。例如,直肠癌识别模型用于识别肠部组织是否患有癌症;肝癌识别模型用于识别肝部组织是否患有癌症。
步骤S30,在确定该病理切片图片的预设类型模型之后,癌症识别程序10利用确定的预设类型模型对该病理切片图片进行识别,生成识别结果。其中,所述识别结果包括非癌症和癌症。当识别结果为非癌症时,说明该病理切片图片对应的患者没有患癌症,继续接收下一张病理切片图片进行识别。当识别结果为癌症时,说明该病理切片图片对应的患者患有癌症,输出预设格式的提示信息。例如,利用胃癌识别模型对患者胃部组织的病理切片图片进行识别,识别结果确定为癌症,输出提示信息:“图片***的患者有胃癌,建议尽快制定有效的治疗方案”。
在另一个实施例中,还可以利用预设类型模型对病理切片图片进行识别,对识别结果确定为癌症的病理切片进一步判断,确定癌症的癌变阶段。若癌变阶段为第一阶段,则输出第一预设格式的提示信息;若癌变阶段为第二阶段,则输出第二预设格式的提示信息;若癌变阶段为第三阶段,则输出第三 预设格式的提示信息。例如,判断癌症的癌变阶段,当癌变阶段为第一阶段时,输出提示信息:“图片***的患者为***癌早期,建议通过检验手段尽快确认病情,并尽早制定有效的治疗方案”;当癌变阶段为第二阶段时,输出提示信息:“图片***的患者为***癌中期,建议尽快制定有效的治疗方案”;当癌变阶段为第三阶段时,输出提示信息:“图片***的患者为***癌晚期,建议开通绿色医疗通道,并紧急制定有效的治疗方案”。
其中,所述预设类型模型是预先构建并训练好的。如图4所示,是本申请预设类型模型训练的流程图,所述预设类型模型的训练步骤如下:
A1、获取第一预设数量预设格式的某种癌症的病理切片样本图片。例如,针对胃癌识别模型的训练,在800个病人患胃癌和200个病人未患胃癌的1000个病人中,每个人获取100个tiff格式的病理切片图片,得到10万个病理切片样本图片。
A2、在每个病理切片样本图片上建立坐标轴及标注癌变标记点,每个癌变标记点都标有其对应的横、纵坐标,如癌变标记点1的横、纵坐标为(53,123)。若该病理切片图片的细胞正常,则标注非癌症标记。其中,所述癌变标记点是指癌变细胞区域与正常细胞区域的分界点。由于癌变细胞的形态结构与正常细胞不同(如,癌变细胞的细胞核体积大,细胞核数量不止一个等),在染色的情况下很容易区分癌变细胞与正常细胞。标注的癌变标记点形成癌变形状曲线,所述癌变形状曲线形成的区域便是细胞癌变的区域。同时在细胞癌变的区域标注对应的癌症和癌变阶段。例如,对10万张胃癌的病理切片图片进行标注非癌症标记或癌变标记点,并对图片有癌变的区域的标注“胃癌”及胃癌阶段。
A3、根据各个病理切片样本图片上的癌变形状曲线,按照预设的癌变区域确定规则分别识别出各个病理切片样本图片对应的一个或多个癌变区域图块。由于癌细胞粘性低具有转移性,因此一个病理切片样本图片可能存在着多个癌变区域图块。
其中,所述癌变区域确定规则包括:
针对一个病理切片样本图片,逐一选择该病理切片样本图片上癌变形状曲线。例如,在一个病理切片样本图片中,若该病理切片图片具有多个癌变形状曲线,逐一选择每一个癌变形状曲线。
选择一个癌变形状曲线后,确定该癌变形状曲线上所有癌变标记点的最大横坐标、最小横坐标、最大纵坐标、最小纵坐标,将确定的最大横坐标作为一个矩形框的第一条边的横坐标,将确定的最小横坐标作为该矩形框的第二条边的横坐标,将确定的最大纵坐标作为该矩形框的第三条边的纵坐标,将确定的最小纵坐标作为该矩形框的第四条边的纵坐标,该矩形框的位置由所述第一条边、第二条边、第三条边及第四条边相交的四个顶点确定,该矩形框围成的图片区域即为癌变区域图块。例如,某癌变形状曲线上所有的癌变标记点中最大横坐标为x 1、最小横坐标为x 2、最大纵坐标y 1、最小纵坐标为y 2,生成的四条边分别为X=x 1、X=x 2、Y=y 1、Y=y 2,则四条边相交的四个顶点(x 1,y 1)、(x 1,y 2)、(x 2,y 1)、(x 2,y 2)为矩形框的顶点,该矩形即为癌变区域图块。
A4、将所有病理切片样本图片对应的癌变区域图块随机分为第一预设比例的训练集和第二预设比例的验证集。例如,将所有病理切片样本图片对应的癌变区域图块按照8:2的比例分成训练集和验证集,训练集占癌变区域图块总量的80%,剩余的20%癌变区域图块作为验证集对模型的优劣进行检测。
A5、将训练集中的癌变区域图块输入到模型中训练,生成所述预设类型模型,并利用验证集中的癌变区域图块对生成的所述预设类型模型进行验证。
其中,具体的过程如下:将训练集中的每个癌变区域图块组成3*2048*2048结构,每次迭代由1个图像更新模型参数。将组成3*2048*2048的图块输入到主神经网络中,经过主网络MaxPool5之后输出1*64*64*512图像。然后,将输出的1*64*64*512图像分别输入到第一特征网络和第二特征网络中。第一特征网络将图像展成一维向量,然后利用ReLU激活函数生成一个1*512向量。第二特征网络对每个特征通道的图像提取均值和方差并拼接起来,然后利用ReLU激活函数生成另一个1*512向量。最后将两个1*512向量拼接成1*1024向量与标注结果输入到主网络,利用ReLU激活函数和softmax激活函数计算,得到模型参数。
A6、若验证通过率大于或等于预设阈值,则训练完成,若验证通过率小于预设阈值,则增加第二预设数量的样本图片,流程返回步骤A3。例如生成胃癌识别模型后,将验证集中的癌变区域图块输入到胃癌识别模型中检测,如果通过率达大于或等于98%,训练完成。如果通过率小于98%,增加2万 张病理切片样本图片,流程返回步骤A3,调整模型参数直到训练出最优胃癌识别模型。
本实施例提出的癌症识别方法,通过调用训练好的预设类型模型对待癌症识别的病理切片图片进行识别,快速检测出该病理切片图片对应的患者是否患有癌症,减少检测时间,增大患者治愈癌症的成功率。
此外,本申请实施例还提出一种计算机可读存储介质,所述计算机可读存储介质中包括癌症识别程序10,所述癌症识别程序10被处理器执行时实现如下操作:
接收步骤:接收待癌症识别的病理切片图片;
确定步骤:根据待识别的癌症类型与预设类型模型的映射关系,确定该病理切片图片对应的预设类型模型;
识别步骤:利用确定的预设类型模型对该病理切片图片进行识别,生成识别结果。
优选地,所述预设类型模型包括以下训练步骤:
A1、获取第一预设数量、预设格式的某种癌症的病理切片样本图片;
A2、在每个病理切片样本图片上标注癌变标记点,癌变标记点形成癌变形状曲线,并标注病理切片样本图片对应的癌症和癌变阶段;
A3、根据各个病理切片样本图片上的癌变形状曲线,按照预设的癌变区域确定规则分别识别出各个病理切片样本图片对应的一个或多个癌变区域图块;
A4、将所有病理切片样本图片对应的癌变区域图块分为第一预设比例的训练集和第二预设比例的验证集;
A5、利用训练集中的癌变区域图块进行模型训练,生成所述预设类型模型,并利用验证集中的癌变区域图块对生成的所述预设类型模型进行验证;
A6、若验证通过率大于或等于预设阈值,则训练完成,若验证通过率小于预设阈值,则增加第二预设数量的样本图片,流程返回步骤A3。
优选地,所述预设的癌变区域确定规则包括:
针对一个病理切片样本图片,逐一选择该病理切片样本图片上癌变形状曲线;
选择一个癌变形状曲线后,确定该癌变形状曲线上所有癌变标记点的最大横坐标、最小横坐标、最大纵坐标、最小纵坐标,将确定的最大横坐标作为一个矩形框的第一条边的横坐标,将确定的最小横坐标作为该矩形框的第二条边的横坐标,将确定的最大纵坐标作为该矩形框的第三条边的纵坐标,将确定的最小纵坐标作为该矩形框的第四条边的纵坐标,该矩形框的位置由所述第一条边、第二条边、第三条边及第四条边相交的四个顶点确定,该矩形框围成的图片区域即为癌变区域图块。
优选地,所述预设类型模型为卷积神经网络模型,该卷积神经网络模型主网络结构包括第一特征网络、第二特征网络的子网络结构,所述病理切片样本图片分别经过所述第一特征网络、第二特征网络提取特征并进行特征拼接后,再输入主网络结构中参与训练。
优选地,该方法还包括:
若生成的识别结果确定为癌症,则判断该癌症的癌变阶段,并输出与该癌变阶段相对应的预设格式的提示信息。
本申请之计算机可读存储介质的具体实施方式与上述癌症识别方法的具体实施方式大致相同,在此不再赘述。
上述本申请实施例序号仅仅为了描述,不代表实施例的优劣。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到上述实施例方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在如上所述的一个存储介质(如ROM/RAM、磁碟、光盘)中,包括若干指令用以使得一台终端设备(可以是手机,计算机,服务器,或者网络设备等)执行本申请各个实施例所述的方法。
以上仅为本申请的优选实施例,并非因此限制本申请的专利范围,凡是利用本申请说明书及附图内容所作的等效结构或等效流程变换,或直接或间接运用在其他相关的技术领域,均同理包括在本申请的专利保护范围内。

Claims (20)

  1. 一种癌症识别方法,其特征在于,所述方法包括:
    接收步骤:接收待癌症识别的病理切片图片;
    确定步骤:根据待识别的癌症类型与预设类型模型的映射关系,确定该病理切片图片对应的预设类型模型;
    识别步骤:利用确定的预设类型模型对该病理切片图片进行识别,生成识别结果。
  2. 根据权利要求1所述的癌症识别方法,其特征在于,所述预设类型模型包括以下训练步骤:
    A1、获取第一预设数量、预设格式的某种癌症的病理切片样本图片;
    A2、在每个病理切片样本图片上标注癌变标记点,癌变标记点形成癌变形状曲线,并标注病理切片样本图片对应的癌症和癌变阶段;
    A3、根据各个病理切片样本图片上的癌变形状曲线,按照预设的癌变区域确定规则分别识别出各个病理切片样本图片对应的一个或多个癌变区域图块;
    A4、将所有病理切片样本图片对应的癌变区域图块分为第一预设比例的训练集和第二预设比例的验证集;
    A5、利用训练集中的癌变区域图块进行模型训练,生成所述预设类型模型,并利用验证集中的癌变区域图块对生成的所述预设类型模型进行验证;
    A6、若验证通过率大于或等于预设阈值,则训练完成,若验证通过率小于预设阈值,则增加第二预设数量的样本图片,流程返回步骤A3。
  3. 根据权利要求2所述的癌症识别方法,其特征在于,所述A2步骤还包括在每个病理切片样本图片上建立坐标轴,记录每个癌变标记点对应的横、纵坐标。
  4. 根据权利要求2所述的癌症识别方法,其特征在于,所述预设的癌变区域确定规则包括:
    针对一个病理切片样本图片,逐一选择该病理切片样本图片上癌变形状曲线;
    选择一个癌变形状曲线后,确定该癌变形状曲线上所有癌变标记点的最大横坐标、最小横坐标、最大纵坐标、最小纵坐标,将确定的最大横坐标作 为一个矩形框的第一条边的横坐标,将确定的最小横坐标作为该矩形框的第二条边的横坐标,将确定的最大纵坐标作为该矩形框的第三条边的纵坐标,将确定的最小纵坐标作为该矩形框的第四条边的纵坐标,该矩形框的位置由所述第一条边、第二条边、第三条边及第四条边相交的四个顶点确定,该矩形框围成的图片区域即为癌变区域图块。
  5. 根据权利要求1或2所述的癌症识别方法,其特征在于,所述预设类型模型为卷积神经网络模型,该卷积神经网络模型主网络结构包括第一特征网络、第二特征网络的子网络结构,所述病理切片样本图片分别经过所述第一特征网络、第二特征网络提取特征并进行特征拼接后,再输入主网络结构中参与训练。
  6. 根据权利要求1所述的癌症识别方法,其特征在于,该方法还包括:
    若生成的识别结果确定为癌症,则判断该癌症的癌变阶段,并输出与该癌变阶段相对应的预设格式的提示信息。
  7. 根据权利要求1所述的癌症识别方法,其特征在于,该方法还包括设置每个预设类型模型与每种癌症类型形成一一对应的映射关系。
  8. 一种电子装置,其特征在于,所述装置包括:存储器、处理器,所述存储器上存储有癌症识别程序,所述癌症识别程序被所述处理器执行,可实现如下步骤:
    接收步骤:接收待癌症识别的病理切片图片;
    确定步骤:根据待识别的癌症类型与预设类型模型的映射关系,确定该病理切片图片对应的预设类型模型;
    识别步骤:利用确定的预设类型模型对该病理切片图片进行识别,生成识别结果。
  9. 根据权利要求8所述的电子装置,其特征在于,所述预设类型模型包括以下训练步骤:
    A1、获取第一预设数量、预设格式的某种癌症的病理切片样本图片;
    A2、在每个病理切片样本图片上标注癌变标记点,癌变标记点形成癌变形状曲线,并标注病理切片样本图片对应的癌症和癌变阶段;
    A3、根据各个病理切片样本图片上的癌变形状曲线,按照预设的癌变区域确定规则分别识别出各个病理切片样本图片对应的一个或多个癌变区域图 块;
    A4、将所有病理切片样本图片对应的癌变区域图块分为第一预设比例的训练集和第二预设比例的验证集;
    A5、利用训练集中的癌变区域图块进行模型训练,生成所述预设类型模型,并利用验证集中的癌变区域图块对生成的所述预设类型模型进行验证;
    A6、若验证通过率大于或等于预设阈值,则训练完成,若验证通过率小于预设阈值,则增加第二预设数量的样本图片,流程返回步骤A3。
  10. 根据权利要求9所述的电子装置,其特征在于,所述A2步骤还包括在每个病理切片样本图片上建立坐标轴,记录每个癌变标记点对应的横、纵坐标。
  11. 根据权利要求9所述的电子装置,其特征在于,所述预设的癌变区域确定规则包括:
    针对一个病理切片样本图片,逐一选择该病理切片样本图片上癌变形状曲线;
    选择一个癌变形状曲线后,确定该癌变形状曲线上所有癌变标记点的最大横坐标、最小横坐标、最大纵坐标、最小纵坐标,将确定的最大横坐标作为一个矩形框的第一条边的横坐标,将确定的最小横坐标作为该矩形框的第二条边的横坐标,将确定的最大纵坐标作为该矩形框的第三条边的纵坐标,将确定的最小纵坐标作为该矩形框的第四条边的纵坐标,该矩形框的位置由所述第一条边、第二条边、第三条边及第四条边相交的四个顶点确定,该矩形框围成的图片区域即为癌变区域图块。
  12. 根据权利要求8或9所述的电子装置,其特征在于,所述预设类型模型为卷积神经网络模型,该卷积神经网络模型主网络结构包括第一特征网络、第二特征网络的子网络结构,所述病理切片样本图片分别经过所述第一特征网络、第二特征网络提取特征并进行特征拼接后,再输入主网络结构中参与训练。
  13. 根据权利要求8所述的电子装置,其特征在于,所述癌症识别程序被所述处理器执行时,还实现如下步骤:
    若生成的识别结果确定为癌症,则判断该癌症的癌变阶段,并输出与该癌变阶段相对应的预设格式的提示信息。
  14. 根据权利要求8所述的电子装置,其特征在于,所述癌症识别程序被所述处理器执行时,还实现如下步骤:设置每个预设类型模型与每种癌症类型形成一一对应的映射关系。
  15. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质中包括癌症识别程序,所述统癌症识别程序被处理器执行时,可实现如下步骤:
    接收步骤:接收待癌症识别的病理切片图片;
    确定步骤:根据待识别的癌症类型与预设类型模型的映射关系,确定该病理切片图片对应的预设类型模型;
    识别步骤:利用确定的预设类型模型对该病理切片图片进行识别,生成识别结果。
  16. 根据权利要求15所述的计算机可读存储介质,其特征在于,所述预设类型模型包括以下训练步骤:
    A1、获取第一预设数量、预设格式的某种癌症的病理切片样本图片;
    A2、在每个病理切片样本图片上标注癌变标记点,癌变标记点形成癌变形状曲线,并标注病理切片样本图片对应的癌症和癌变阶段;
    A3、根据各个病理切片样本图片上的癌变形状曲线,按照预设的癌变区域确定规则分别识别出各个病理切片样本图片对应的一个或多个癌变区域图块;
    A4、将所有病理切片样本图片对应的癌变区域图块分为第一预设比例的训练集和第二预设比例的验证集;
    A5、利用训练集中的癌变区域图块进行模型训练,生成所述预设类型模型,并利用验证集中的癌变区域图块对生成的所述预设类型模型进行验证;
    A6、若验证通过率大于或等于预设阈值,则训练完成,若验证通过率小于预设阈值,则增加第二预设数量的样本图片,流程返回步骤A3。
  17. 根据权利要求16所述的计算机可读存储介质,其特征在于,所述A2步骤还包括在每个病理切片样本图片上建立坐标轴,记录每个癌变标记点对应的横、纵坐标。
  18. 根据权利要求16所述的计算机可读存储介质,其特征在于,所述预设的癌变区域确定规则包括:
    针对一个病理切片样本图片,逐一选择该病理切片样本图片上癌变形状曲线;
    选择一个癌变形状曲线后,确定该癌变形状曲线上所有癌变标记点的最大横坐标、最小横坐标、最大纵坐标、最小纵坐标,将确定的最大横坐标作为一个矩形框的第一条边的横坐标,将确定的最小横坐标作为该矩形框的第二条边的横坐标,将确定的最大纵坐标作为该矩形框的第三条边的纵坐标,将确定的最小纵坐标作为该矩形框的第四条边的纵坐标,该矩形框的位置由所述第一条边、第二条边、第三条边及第四条边相交的四个顶点确定,该矩形框围成的图片区域即为癌变区域图块。
  19. 根据权利要求15或16所述的计算机可读存储介质,其特征在于,所述预设类型模型为卷积神经网络模型,该卷积神经网络模型主网络结构包括第一特征网络、第二特征网络的子网络结构,所述病理切片样本图片分别经过所述第一特征网络、第二特征网络提取特征并进行特征拼接后,再输入主网络结构中参与训练。
  20. 根据权利要求15所述的计算机可读存储介质,其特征在于,所述癌症识别程序被所述处理器执行时,还实现如下步骤:
    若生成的识别结果确定为癌症,则判断该癌症的癌变阶段,并输出与该癌变阶段相对应的预设格式的提示信息。
PCT/CN2018/089132 2018-01-12 2018-05-31 癌症识别方法、装置及存储介质 WO2019136908A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810030195.2 2018-01-12
CN201810030195.2A CN108154509B (zh) 2018-01-12 2018-01-12 癌症识别方法、装置及存储介质

Publications (1)

Publication Number Publication Date
WO2019136908A1 true WO2019136908A1 (zh) 2019-07-18

Family

ID=62461461

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/089132 WO2019136908A1 (zh) 2018-01-12 2018-05-31 癌症识别方法、装置及存储介质

Country Status (2)

Country Link
CN (1) CN108154509B (zh)
WO (1) WO2019136908A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114271763A (zh) * 2021-12-20 2022-04-05 合肥中纳医学仪器有限公司 一种基于Mask RCNN的胃癌早期识别方法、系统、装置
CN115063739A (zh) * 2022-06-10 2022-09-16 嘉洋智慧安全生产科技发展(北京)有限公司 异常行为的检测方法、装置、设备及计算机存储介质
CN115619634A (zh) * 2022-09-06 2023-01-17 广州医科大学附属第一医院(广州呼吸中心) 基于病理切片关联的病理图像拼接方法及装置

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110674831B (zh) * 2018-06-14 2023-01-06 佛山市顺德区美的电热电器制造有限公司 一种数据处理方法、装置及计算机可读存储介质
CN108875693B (zh) * 2018-07-03 2021-08-10 北京旷视科技有限公司 一种图像处理方法、装置、电子设备及其存储介质
CN109118485A (zh) * 2018-08-13 2019-01-01 复旦大学 基于多任务神经网络的消化道内镜图像分类及早癌检测系统
CN109360656B (zh) * 2018-08-20 2021-11-02 安徽大学 一种基于多目标演化算法的癌症检测方法
CN109215788B (zh) * 2018-08-22 2022-01-18 四川大学 一种口腔黏膜病损癌变危险程度的预测方法及装置
CN115063403A (zh) * 2022-07-27 2022-09-16 中山大学肿瘤防治中心(中山大学附属肿瘤医院、中山大学肿瘤研究所) 三级淋巴结构的识别方法、装置及设备
CN117831030A (zh) * 2023-11-15 2024-04-05 中康智慧(上海)生命科技有限公司 基于多模态的癌症早期器官病变症状智能识别方法

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017046796A1 (en) * 2015-09-14 2017-03-23 Real Imaging Ltd. Image data correction based on different viewpoints
CN106991439A (zh) * 2017-03-28 2017-07-28 南京天数信息科技有限公司 基于深度学习与迁移学习的图像识别方法
CN107203778A (zh) * 2017-05-05 2017-09-26 平安科技(深圳)有限公司 视网膜病变程度等级检测系统及方法

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101366059A (zh) * 2005-12-29 2009-02-11 卡尔斯特里姆保健公司 用于多个器官系统的cad检测系统
WO2010082101A1 (en) * 2009-01-19 2010-07-22 Koninklijke Philips Electronics, N.V. Regional reconstruction and quantitative assessment in list mode pet imaging
ES2388413B1 (es) * 2010-07-01 2013-08-22 Telefónica, S.A. Método para la clasificación de videos.
CN103679685B (zh) * 2012-09-11 2018-03-27 北京三星通信技术研究有限公司 图像处理系统和图像处理方法
WO2017021919A1 (en) * 2015-08-06 2017-02-09 Tel Hashomer Medical Research, Infrastructure And Services Ltd. Mamography apparatus
CN105956620A (zh) * 2016-04-29 2016-09-21 华南理工大学 一种基于稀疏表示的肝脏超声图像识别方法
CN106055576B (zh) * 2016-05-20 2018-04-10 大连理工大学 一种大规模数据背景下的快速有效的图像检索方法
CN106372648B (zh) * 2016-10-20 2020-03-13 中国海洋大学 基于多特征融合卷积神经网络的浮游生物图像分类方法
CN106570505B (zh) * 2016-11-01 2020-08-21 北京昆仑医云科技有限公司 对组织病理图像进行分析的方法和系统
CN106709907A (zh) * 2016-12-08 2017-05-24 上海联影医疗科技有限公司 Mr图像的处理方法及装置
CN107292312B (zh) * 2017-06-19 2021-06-22 中国科学院苏州生物医学工程技术研究所 肿瘤ct图像处理方法
CN107463964A (zh) * 2017-08-15 2017-12-12 山东师范大学 一种基于超声图像特征相关性的乳腺肿瘤分类方法、装置
CN107526799B (zh) * 2017-08-18 2021-01-08 武汉红茶数据技术有限公司 一种基于深度学习的知识图谱构建方法
CN107563997B (zh) * 2017-08-24 2020-06-02 京东方科技集团股份有限公司 一种皮肤病诊断系统、构建方法、分类方法和诊断装置

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017046796A1 (en) * 2015-09-14 2017-03-23 Real Imaging Ltd. Image data correction based on different viewpoints
CN106991439A (zh) * 2017-03-28 2017-07-28 南京天数信息科技有限公司 基于深度学习与迁移学习的图像识别方法
CN107203778A (zh) * 2017-05-05 2017-09-26 平安科技(深圳)有限公司 视网膜病变程度等级检测系统及方法

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114271763A (zh) * 2021-12-20 2022-04-05 合肥中纳医学仪器有限公司 一种基于Mask RCNN的胃癌早期识别方法、系统、装置
CN114271763B (zh) * 2021-12-20 2024-05-28 合肥中纳医学仪器有限公司 一种基于Mask RCNN的胃癌早期识别方法、系统、装置
CN115063739A (zh) * 2022-06-10 2022-09-16 嘉洋智慧安全生产科技发展(北京)有限公司 异常行为的检测方法、装置、设备及计算机存储介质
CN115619634A (zh) * 2022-09-06 2023-01-17 广州医科大学附属第一医院(广州呼吸中心) 基于病理切片关联的病理图像拼接方法及装置
CN115619634B (zh) * 2022-09-06 2023-06-20 广州医科大学附属第一医院(广州呼吸中心) 基于病理切片关联的病理图像拼接方法及装置

Also Published As

Publication number Publication date
CN108154509A (zh) 2018-06-12
CN108154509B (zh) 2022-11-11

Similar Documents

Publication Publication Date Title
WO2019136908A1 (zh) 癌症识别方法、装置及存储介质
WO2019223146A1 (zh) 胃癌识别方法、装置及存储介质
AU2018394106B2 (en) Processing of histology images with a convolutional neural network to identify tumors
WO2022134337A1 (zh) 人脸遮挡检测方法、系统、设备及存储介质
WO2021189912A1 (zh) 图像中目标物的检测方法、装置、电子设备及存储介质
WO2018201647A1 (zh) 视网膜病变程度等级检测方法、装置及存储介质
US10235603B2 (en) Method, device and computer-readable medium for sensitive picture recognition
WO2019223147A1 (zh) 肝脏癌变定位方法、装置及存储介质
WO2022001623A1 (zh) 基于人工智能的图像处理方法、装置、设备及存储介质
WO2020082577A1 (zh) 印章防伪检验方法、装置及计算机可读存储介质
WO2019071662A1 (zh) 电子装置、票据信息识别方法和计算机可读存储介质
US11354797B2 (en) Method, device, and system for testing an image
CN109978063B (zh) 一种生成目标对象的对齐模型的方法
WO2019071660A1 (zh) 票据信息识别方法、电子装置及可读存储介质
WO2018090641A1 (zh) 识别保险单号码的方法、装置、设备及计算机可读存储介质
WO2020248848A1 (zh) 智能化异常细胞判断方法、装置及计算机可读存储介质
CN110059697A (zh) 一种基于深度学习的肺结节自动分割方法
WO2020259453A1 (zh) 3d图像的分类方法、装置、设备及存储介质
WO2020253508A1 (zh) 异常细胞检测方法、装置及计算机可读存储介质
WO2021151338A1 (zh) 医学影像图片分析方法、装置、电子设备及可读存储介质
WO2021184847A1 (zh) 一种遮挡车牌字符识别方法、装置、存储介质和智能设备
WO2021073120A1 (zh) 医学影像的肺部区域阴影标记方法、装置、服务器及存储介质
CN109741338A (zh) 一种人脸分割方法、装置及设备
WO2021147221A1 (zh) 文本识别方法、装置、电子设备及存储介质
CN112132812B (zh) 证件校验方法、装置、电子设备及介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18899636

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 12.10.2020)

122 Ep: pct application non-entry in european phase

Ref document number: 18899636

Country of ref document: EP

Kind code of ref document: A1