WO2021238438A1 - Tumor image processing method and apparatus, electronic device, and storage medium - Google Patents


Info

Publication number
WO2021238438A1
Authority
WO
WIPO (PCT)
Prior art keywords
target organ
data
tumor
image
original image
Prior art date
Application number
PCT/CN2021/086139
Other languages
French (fr)
Chinese (zh)
Inventor
王斯凡
Original Assignee
京东方科技集团股份有限公司 (BOE Technology Group Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 京东方科技集团股份有限公司 (BOE Technology Group Co., Ltd.)
Publication of WO2021238438A1


Classifications

    • G — PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T 7/00 Image analysis
                    • G06T 7/0002 Inspection of images, e.g. flaw detection
                        • G06T 7/0012 Biomedical image inspection
                    • G06T 7/10 Segmentation; Edge detection
                        • G06T 7/11 Region-based segmentation
                    • G06T 7/60 Analysis of geometric attributes
                        • G06T 7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
                        • G06T 7/66 Analysis of geometric attributes of image moments or centre of gravity
                • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
                • G06T 2207/00 Indexing scheme for image analysis or image enhancement
                    • G06T 2207/10 Image acquisition modality
                        • G06T 2207/10072 Tomographic images
                            • G06T 2207/10081 Computed x-ray tomography [CT]
                    • G06T 2207/20 Special algorithmic details
                        • G06T 2207/20081 Training; Learning
                        • G06T 2207/20084 Artificial neural networks [ANN]
                        • G06T 2207/20092 Interactive image processing based on input by user
                        • G06T 2207/20112 Image segmentation details
                            • G06T 2207/20132 Image cropping
                    • G06T 2207/30 Subject of image; Context of image processing
                        • G06T 2207/30004 Biomedical image processing
                            • G06T 2207/30096 Tumor; Lesion

Definitions

  • the present invention relates to the technical field of medical image processing, in particular to a method and device for processing tumor images, electronic equipment, and storage media.
  • CT scanning is commonly used for tumor inspection.
  • A single CT scan produces at least 500 MB of DICOM data.
  • DICOM data with more than 300 tomographic images requires doctors to check the images one by one, which takes more than 30 minutes. If a diagnosis is made, the imaging doctor then needs to perform a three-dimensional reconstruction to assist the clinician in surgical planning, which usually takes more than 60 minutes. This is labor-intensive and inefficient for doctors.
  • Image processing is therefore gradually being applied to medical images, to reduce the time doctors spend checking images one by one.
  • An embodiment of the present disclosure proposes a method for processing a tumor image, which includes the following steps: obtaining an original image of a target organ; performing rough recognition on the original image to obtain, from the original image, region data of the area where the target organ is located; performing fine recognition on the region data to obtain first data of the target organ and second data of the tumor on the target organ; and, according to the first data and the second data, performing two-dimensional delineation and/or three-dimensional reconstruction of the target organ and the tumor on the target organ.
  • Performing rough recognition on the original image to obtain the region data of the area where the target organ is located includes: inputting the original image into a U-shaped network structure with a three-layer step connection to obtain a first image; performing three-dimensional coordinate mapping on each position point in the first image to obtain the spatial coordinates of each position point; and determining the region data of the target organ according to the spatial coordinates.
  • Determining the region data of the target organ according to the spatial coordinates includes: performing cluster analysis on the spatial coordinates to obtain the center-of-gravity position corresponding to the position points in the first image; obtaining the first distance between each position point and the center-of-gravity position; identifying the maximum value among the first distances and using the maximum value as the shear radius; and cropping around the center-of-gravity position according to the shear radius, so that the cropped area is used as the region data of the target organ.
  • Identifying the maximum value among the first distances and using the maximum value as the shear radius includes: obtaining a redundancy amount for the shear radius, and using the sum of the maximum value and the redundancy amount as the shear radius.
  • Performing fine recognition on the region data to obtain the first data of the target organ and the second data of the tumor on the target organ includes: obtaining a region image corresponding to the region data; inputting the region image into a 3DU network structure with a sparse connection module and a multi-level residual module; using the 3DU network structure to finely recognize the region image and obtain a second image that has been finely recognized; and analyzing the second image to obtain the first data of the target organ and the second data of the tumor on the target organ.
  • Acquiring the second image that has been finely recognized through the 3DU network structure includes: using the 3DU network structure to down-sample the region data to acquire a first feature map;
  • inputting the first feature map into the sparse connection module to obtain a second feature map;
  • inputting the second feature map into the multi-level residual module to obtain a third feature map; and
  • up-sampling the third feature map to obtain the second image.
  • the sparse connection module includes four cascaded branches.
  • the multi-level residual module adopts three-scale pooling operations.
  • Performing the two-dimensional delineation of the target organ and the tumor on the target organ according to the first data and the second data includes: obtaining a selection instruction for the target organ, wherein the selection instruction includes a selection position of the target organ; extracting, according to the selection instruction, a target original image corresponding to the selection position from the original image; and performing the two-dimensional delineation of the target organ and the tumor on the target organ according to the first data and the second data.
  • The selection instruction may further include an image angle.
  • In that case, extracting the target original image corresponding to the selection position from the original image according to the selection instruction includes: extracting, according to the selection instruction, a target original image that corresponds to the selection position and conforms to the image angle.
  • The target organ and its tumor can thus be obtained with higher precision, and the two-dimensional delineation and/or three-dimensional reconstruction of the target organ and its tumor enables the doctor to more accurately determine their location, size, and shape based on the identification image processed by this application, which facilitates diagnosis and treatment.
  • An embodiment of the present disclosure proposes a tumor image processing device, which includes: an acquisition module for acquiring an original image taken of a target organ; a first recognition module for performing rough recognition on the original image to obtain, from the original image, the region data of the area where the target organ is located; and a second recognition module for performing fine recognition on the region data to obtain the first data of the target organ and the second data of the tumor on the target organ.
  • the identification module is configured to perform two-dimensional delineation and/or three-dimensional reconstruction of the target organ and the tumor on the target organ according to the first data and the second data.
  • the embodiment of the present disclosure proposes an electronic device including a memory, a processor, and a computer program stored on the memory and capable of running on the processor, and the processor implements the above-mentioned tumor image processing method when the program is executed.
  • the embodiment of the present disclosure proposes a computer-readable storage medium on which a computer program is stored, and when the program is executed by a processor, the above-mentioned tumor image processing method is realized.
  • FIG. 1 is a flowchart of a method for processing a tumor image provided by an embodiment of the present invention.
  • FIG. 2 is a flowchart of another tumor image processing method provided by an embodiment of the present invention.
  • FIG. 3 is a schematic structural diagram of a simplified Unet model provided by an embodiment of the present invention.
  • FIG. 4 is a flowchart of another method for processing tumor images provided by an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of the effect after the simplified Unet model processing provided by the embodiment of the present invention.
  • FIG. 6 is a flowchart of another method for processing a tumor image provided by an embodiment of the present invention.
  • FIG. 7 is a schematic structural diagram of a 3DU network structure with a sparse connection module and a multi-level residual module provided by an embodiment of the present invention.
  • FIG. 8 is a flowchart of another tumor image processing method provided by an embodiment of the present invention.
  • FIG. 9 is a schematic structural diagram of a sparse connection module provided by an embodiment of the present invention.
  • FIG. 10 is a schematic structural diagram of a multi-level residual module provided by an embodiment of the present invention.
  • FIG. 11 is a schematic diagram of the principle of a training sample set according to an embodiment of the present invention.
  • FIG. 12 is a flowchart of yet another tumor image processing method provided by an embodiment of the present invention.
  • FIG. 13 is a schematic diagram of a specific two-dimensional delineation of a target organ and its tumor according to an embodiment of the present invention.
  • FIG. 14 is another schematic diagram of specific two-dimensional delineation and three-dimensional reconstruction of a target organ and its tumor according to an embodiment of the present invention.
  • FIG. 15 is a schematic block diagram of a tumor image processing apparatus provided by an embodiment of the present invention.
  • FIG. 1 is a schematic flowchart of a method for processing a tumor image provided by an embodiment of the present invention. As shown in Fig. 1, the method for processing a tumor image according to an embodiment of the present invention includes the following steps:
  • the original image is, for example, a CT (Computed Tomography) image taken of a patient’s target organ.
  • a CT scan generates at least DICOM (Digital Imaging and Communications in Medicine) data with a size of more than 500M. Each case has more than 300 tomographic images.
  • S102 Perform rough recognition on the original image to obtain area data of the area where the target organ is located from the original image.
  • S103 Perform precise recognition on the area data, and obtain the first data of the target organ and the second data of the tumor on the target organ.
  • the first data may include information such as the location and size of the target organ
  • The second data may include information such as the location, size, and shape of the tumor.
  • S104 Perform two-dimensional delineation and/or three-dimensional reconstruction on the target organ and the tumor on the target organ according to the first data and the second data.
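The four steps S101-S104 can be sketched as a small pipeline. This is a hypothetical illustration only: the two recognizers are stubbed with placeholder logic (the actual models are described in later sections), and all names are invented for the sketch.

```python
# Hypothetical sketch of the S101-S104 pipeline; recognizer internals are
# stand-ins for the networks described later in this document.

def rough_recognition(original_image):
    """S102: extract the region (sub-volume) containing the target organ."""
    # Stub: pretend the organ occupies the central half of the image.
    n = len(original_image)
    return original_image[n // 4: 3 * n // 4]

def fine_recognition(region_data):
    """S103: label each voxel: 0 = background, 1 = organ, 2 = tumor."""
    # Stub: threshold-based placeholder for the fine-recognition model.
    return [1 if v > 0 else 0 for v in region_data]

def process_tumor_image(original_image):
    """S101-S104: acquire, roughly recognize, finely recognize, delineate."""
    region = rough_recognition(original_image)           # S102
    labels = fine_recognition(region)                    # S103
    organ = [i for i, k in enumerate(labels) if k == 1]  # first data
    tumor = [i for i, k in enumerate(labels) if k == 2]  # second data
    return organ, tumor                                  # inputs to S104

organ, tumor = process_tumor_image([0, 0, 5, 7, 0, 0, 0, 0])
```
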
  • The tumor image processing method proposed in this application recognizes the original image twice, first coarsely and then finely, so the target organ and its tumor can be obtained with higher accuracy.
  • Performing two-dimensional delineation and/or three-dimensional reconstruction enables the doctor to more accurately determine the location, size, and shape of the target organ and the tumor on it from the processed identification image, which facilitates diagnosis and treatment.
  • performing rough recognition on the original image to obtain the area data of the area where the target organ is located from the original image includes:
  • S201 Input the original image into a U-shaped network structure with three-layer step connections to obtain a first image.
  • The U-shaped network in the example of this application is, for example, a simplified Unet model; the Unet model has the characteristics of a simple structure and a close combination of deep and shallow semantics.
  • This application optimizes the Unet model, as shown in FIG. 3. Removing the fourth-layer connection in the traditional Unet model structure and adjusting it to a three-layer step connection reduces the model's parameter count by about half and effectively improves operating efficiency.
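As a rough illustration of why dropping the deepest level saves parameters, the following sketch counts the weights of a toy U-shaped encoder with two 3x3 convolutions per level and channel widths that double per level. The widths are assumptions (the patent does not specify them), and the exact saving depends on the decoder and the chosen widths.

```python
# Back-of-the-envelope parameter count for the encoder of a U-shaped
# network, assuming 3x3 2-D convolutions, two convs per level, and
# hypothetical channel widths that double per level.

def conv_params(k, c_in, c_out):
    """Weights of a single k x k convolution (biases ignored)."""
    return k * k * c_in * c_out

def encoder_params(levels, base=64):
    total, c_in = 0, 1
    for lvl in range(levels):
        c_out = base * 2 ** lvl
        total += conv_params(3, c_in, c_out) + conv_params(3, c_out, c_out)
        c_in = c_out
    return total

four = encoder_params(4)   # traditional four-level structure
three = encoder_params(3)  # simplified three-level structure
```

In this toy encoder the deepest level dominates the count, so removing it saves well over half of the encoder's weights; the roughly-1/2 figure in the text presumably refers to the model as a whole.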
  • This application further processes the image obtained through the U-shaped network structure to determine the region data of the area where the target organ is located. That is, the input data for the subsequent fine recognition has irrelevant information cut away and contains only data for the area where the target organ is located, effectively reducing the amount of data to be processed during fine recognition.
  • the input data for fine recognition can be regional data that only includes the left kidney and the right kidney.
  • S202 Perform three-dimensional coordinate mapping on each position point in the first image to obtain the spatial coordinates of each position point.
  • S203 Determine the area data of the target organ according to the space coordinates.
  • determining the area data of the target organ according to the spatial coordinates includes:
  • S301 Perform cluster analysis on the spatial coordinates, and obtain the center of gravity position corresponding to each position point in the first image.
  • S304 Crop around the center-of-gravity position according to the shear radius, so that the cropped area is used as the region data of the target organ.
  • An elastic margin, that is, a redundancy amount for the shear radius, can also be applied; the redundancy amount may be determined from the distance between the center of gravity and the position point.
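The centroid-and-radius cropping of S301-S304 can be sketched as follows, under simplifying assumptions: the cluster analysis is reduced to a plain centroid over foreground coordinates, the redundancy amount is an explicit `slack` argument, and the cropped region is modeled as an axis-aligned cube.

```python
# Sketch of S301-S304; a minimal stand-in for the clustering and cropping
# described in the text, not the patent's actual implementation.
import math

def shear_region(points, slack=0.0):
    """points: (x, y, z) coordinates of foreground voxels.
    Returns (center of gravity, shear radius, bounding cube)."""
    n = len(points)
    cog = tuple(sum(p[i] for p in points) / n for i in range(3))  # S301
    dists = [math.dist(p, cog) for p in points]                   # distances
    radius = max(dists) + slack        # max distance plus redundancy amount
    cube = tuple((c - radius, c + radius) for c in cog)           # S304 crop
    return cog, radius, cube

cog, r, cube = shear_region([(0, 0, 0), (2, 0, 0), (1, 3, 0)], slack=0.5)
```
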
  • The present application can extract the region data of the target organ through rough recognition, thereby reducing the amount of data processed in the subsequent fine recognition and effectively improving the accuracy and speed of image processing.
  • performing precise recognition on the area data to generate a target organ image containing tumor data includes:
  • S401 Acquire an area image corresponding to the area data.
  • The region data may be data blocks containing lesion and tumor information obtained through rough recognition. Since the 3DU network is an image processing network, the data blocks need to be converted into images for fine recognition.
  • S402 Input the regional image into a 3DU network structure with a sparse connection module and a multi-level residual module.
  • The 3DU network structure with a sparse connection module (S module) and a multi-level residual module (P module) proposed in this application can be expressed as the SP-3DUnet model; that is, a sparse connection module (S module) and a multi-level residual module (P module) are added to the 3DUnet model.
  • this application adds a sparse connection module (S module) and a multi-level residual module (P module) to the bottom layer of the 3DUnet model, that is, the 3DUnet model After the down-sampling and before the up-sampling.
  • This application also optimizes the 3DUnet model: each convolutional layer removes one convolution operation relative to the original model, reducing the parameters by half and shrinking the model volume. However, this operation correspondingly reduces the model's expressive capacity.
  • This application therefore adds a sparse connection module (S module) and a multi-level residual module (P module) to the bottom layer of the optimized 3DU model, as shown in FIG. 7, to counteract the loss of semantic expressive ability. This improves recognition accuracy and operating efficiency while increasing the parameter count by only about 1/5.
  • S403 Use the 3DU network structure to perform fine recognition on the regional image, and obtain a second image that has been finely recognized by the 3DU network structure.
  • Step S403, acquiring the second image finely recognized through the 3DU network structure, includes:
  • S4031 Down-sample the region data using the 3DU network structure to obtain the first feature map.
  • S4032 Input the first feature map into the sparse connection module to obtain a second feature map.
  • This application proposes a sparse connection based on dilated (hole) convolution to encode high-level semantic feature maps, where the dilated convolutions are stacked in a cascaded manner.
  • the sparse connection module is divided into four cascaded branches.
  • Two kinds of dilated convolution, 1×1×1 and 3×3×3, can be used; the receptive fields corresponding to the branches are 3×3×3, 7×7×7, and 9×9×9.
  • When the sparse connection module is used, convolutions with a large receptive field can extract and generate more abstract features for large targets, and convolutions with a small receptive field can do so for small targets.
  • By combining dilated convolutions with different expansion rates, the sparse connection module can extract features of targets of various sizes, establish a sparse connection mode over the underlying key semantic features, and offset the loss in semantic description caused by removing convolution operations.
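The 3/7/9 receptive fields quoted above are consistent with cascading 3x3x3 convolutions whose dilation rates follow the layout below; the layout is an assumption that merely reproduces those numbers. With stride 1, each convolution of kernel k and dilation d grows the receptive field by (k - 1) * d.

```python
# Receptive-field arithmetic for cascaded dilated convolutions, applied to
# an assumed branch layout for the sparse connection module.

def receptive_field(layers):
    """layers: list of (kernel, dilation) pairs applied in cascade."""
    rf = 1
    for k, d in layers:
        rf += (k - 1) * d
    return rf

branches = [
    [(3, 1)],                  # one 3x3x3 conv            -> RF 3
    [(3, 1), (3, 2)],          # plus a dilation-2 conv    -> RF 7
    [(3, 1), (3, 2), (3, 1)],  # plus one more 3x3x3 conv  -> RF 9
]
fields = [receptive_field(b) for b in branches]
```
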
  • S4033 Input the second feature map to the multi-level residual module to obtain a third feature map.
  • This application proposes to use a multi-level residual module to enhance the algorithm's ability to recognize and generalize multi-size targets.
  • the multi-level residual module uses pooling operations of different scales to extract feature information of different scales and improve the ability to recognize targets of different sizes.
  • a three-scale pooling operation is adopted, and the three branches output three-scale feature domains.
  • A 1×1×1 convolution can also be used after each pooling branch, which reduces the size of the feature domain to 1/N of the original, where N is the number of channels of the original feature domain.
  • S4034 Up-sampling the third feature map to obtain a second image.
  • The third feature map, reduced to 1/N of the original size, is subjected to bilinear interpolation to obtain features of the same size as the original feature map.
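A minimal sketch of the multi-scale pooling idea on a 1-D feature row, assuming average pooling at three scales and nearest-neighbour upsampling in place of bilinear interpolation (the 1x1x1 channel-reduction convolutions are omitted, and the row length is assumed divisible by each scale).

```python
# Toy 1-D stand-in for the multi-level residual module's three pooling
# branches; scales and the residual sum are illustrative assumptions.

def avg_pool(row, scale):
    return [sum(row[i:i + scale]) / scale for i in range(0, len(row), scale)]

def upsample(row, factor):
    # Nearest-neighbour upsampling back to the original length.
    return [v for v in row for _ in range(factor)]

def multi_level_residual(row, scales=(2, 4, 8)):
    """Three pooling branches whose outputs are summed back onto the input."""
    out = list(row)
    for s in scales:
        branch = upsample(avg_pool(row, s), s)  # pool, then restore length
        out = [o + b for o, b in zip(out, branch)]
    return out

features = multi_level_residual([1.0] * 8)
```
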
  • S404 Analyze the second image to obtain the first data including the target organ and the second data including the tumor on the target organ.
  • The image produced by fine recognition through the 3DU network structure with the sparse connection module and the multi-level residual module is a feature image, which needs to be analyzed to obtain the first data containing the target organ and the second data containing the tumor on the target organ. For example, positions belonging to the target organ can be marked as 1, positions belonging to the tumor as 2, and all other positions as 0. The set of positions marked as the target organ is then the first data, which expresses the position and size of the organ; similarly, the set of positions marked as the tumor on the target organ is the second data.
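The label-map analysis described here can be sketched directly; the marker values (0 for background, 1 for organ, 2 for tumor) follow the text, while the data layout (a list of 2-D slices) is an assumption for the sketch.

```python
# Hypothetical parsing of the second image's label map into the first data
# (organ voxel coordinates) and second data (tumor voxel coordinates).

def parse_labels(label_map):
    first_data, second_data = [], []
    for z, plane in enumerate(label_map):
        for y, row in enumerate(plane):
            for x, label in enumerate(row):
                if label == 1:
                    first_data.append((x, y, z))   # organ voxel
                elif label == 2:
                    second_data.append((x, y, z))  # tumor voxel
    return first_data, second_data

label_map = [[[0, 1, 1],
              [0, 1, 2]]]   # one slice: an organ with a single tumor voxel
organ, tumor = parse_labels(label_map)
```
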
  • The simplified U-shaped network structure and the 3DU network structure with the sparse connection and multi-level residual modules used in the above rough and fine recognition need to be trained via deep learning, so that they can accurately identify the first data of the target organ and the second data of the tumor on the target organ.
  • For the training sample set, multiple original images in which the target organs and tumor locations have been manually marked can be input.
  • The training samples can also be marked manually.
  • The training sample set can be expanded by flipping, translation, rotation, deformation, and the like.
  • In the training loss, N is the number of pixels; p(k,i) ∈ [0,1] and g(k,i) ∈ {0,1} represent the predicted probability and the true label of class k at pixel i, respectively; k is the category, and a per-class weight can be set in the embodiments of the present application.
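The symbols above are consistent with a weighted soft Dice loss; whether the patent uses exactly this variant is an assumption, but a common form that matches p(k,i), g(k,i), and a per-class weight reads as follows.

```python
# A soft Dice loss consistent with the symbols in the text: p[k] holds the
# predicted probabilities p(k, i), g[k] the ground-truth labels g(k, i),
# and w[k] a per-class weight. The exact variant used is an assumption.

def dice_loss(p, g, w, eps=1e-6):
    """p, g: dict class -> per-pixel values; w: dict class -> weight."""
    num = sum(w[k] * sum(pi * gi for pi, gi in zip(p[k], g[k])) for k in p)
    den = sum(w[k] * sum(pi + gi for pi, gi in zip(p[k], g[k])) for k in p)
    return 1.0 - 2.0 * num / (den + eps)

# A perfect prediction on a two-class toy example drives the loss to ~0.
p = {1: [1.0, 0.0], 2: [0.0, 1.0]}
g = {1: [1, 0], 2: [0, 1]}
loss = dice_loss(p, g, {1: 1.0, 2: 1.0})
```
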
  • The accuracy of image recognition can also be verified through manual examination by the doctor. If there is a recognition error, a pen (annotation) function can be used to correct it, and the correction result replaces the original recognition result. For example, the correction result is automatically sent back to the cloud training database to update the training sample set and retrain the model, as shown in FIG. 11.
  • the two-dimensional delineation of the target organ and the tumor on the target organ according to the first data and the second data includes:
  • S501 Obtain a selection instruction for a target organ.
  • the selection instruction includes the selection position of the target organ.
  • the selection instruction may further include an image angle, and therefore, a target original image corresponding to the selected position and conforming to the image angle needs to be extracted from the original image.
  • S503 Perform a two-dimensional delineation of the target organ and the tumor on the target organ on the original image according to the first data and the second data.
  • The target organ and the tumor on the target organ can be outlined in two dimensions on the original image based on the first data and the second data, so that doctors and patients can clearly understand the condition of the lesion.
  • it can help doctors focus on identifying pictures containing kidneys when performing medical diagnosis based on CT images, effectively improving the efficiency of diagnosis and treatment, achieving the purpose of precise diagnosis and treatment, and preventing missed diagnosis.
  • the present application can effectively improve the visualization of doctor-patient communication by performing three-dimensional reconstruction of the target organ and its tumor.
  • A selection instruction for the target organ can be obtained, the position and angle selected by the doctor and/or patient can be extracted from the instruction, and the tomographic image corresponding to that position and angle can be extracted from the original image. It can then be judged whether each position point in the image belongs to the first data and/or the second data; if so, it is marked according to the first data and/or the second data, and if not, no operation is performed.
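The judge-and-mark loop for the two-dimensional delineation might look like the following sketch, where the first and second data are sets of voxel coordinates and the marker characters are purely illustrative.

```python
# Sketch of the two-dimensional delineation step: extract the selected
# tomographic slice and mark each pixel that belongs to the first data
# (organ) or second data (tumor); untouched pixels keep their value.

def delineate_slice(volume, z, first_data, second_data):
    """volume: list of 2-D slices; z: the selected slice index."""
    marked = []
    for y, row in enumerate(volume[z]):
        out_row = []
        for x, pixel in enumerate(row):
            if (x, y, z) in second_data:
                out_row.append("T")         # tumor marker (illustrative)
            elif (x, y, z) in first_data:
                out_row.append("O")         # organ marker (illustrative)
            else:
                out_row.append(str(pixel))  # no operation performed
        marked.append(out_row)
    return marked

volume = [[[5, 5], [5, 5]]]
marked = delineate_slice(volume, 0, {(0, 0, 0)}, {(1, 1, 0)})
```
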
  • the left image is a tomographic image of the kidney scanned from a bird's-eye view angle when the human body is upright.
  • The image is the one corresponding to the 286th layer of the 520 tomographic images.
  • The right image is the left image after the kidney and the kidney tumor have been marked.
  • The image angle may also include the side-view and rear-view angles of the upright human body, as shown in FIG. 14, where the image in the upper left corner of FIG. 14 is a tomographic image of the upright human body from the top-view angle, the image in the lower right corner is the tomographic image of the upright human body from the rear-view angle, and the image in the lower left corner is the three-dimensional model of the target organ and its tumor.
  • The target organ and the tumor on it are 3D modeled according to the first data and the second data to obtain a 3D model of the target organ and the tumor, as shown in FIG. 14; the corresponding identification data is extracted from the three-dimensional model according to the selection instruction, and a two-dimensional outline is drawn on the original image.
  • The embodiment of the present application can also perform three-dimensional reconstruction of the target organ and the tumor on it according to the first data and the second data, and directly display the three-dimensionally reconstructed stereoscopic image.
  • The doctor can accurately obtain the position and shape of the tumor at the target organ from the model, which makes it convenient to perform medical planning based on the three-dimensional reconstruction, such as surgical planning, radiotherapy planning, and chemotherapy planning.
  • During surgical planning, the tumor position determined by the method proposed in this application can help doctors locate the tumor precisely, reduce operation time, and improve the quality of the operation. Moreover, accurate positioning avoids misplacing the surgical incision, thereby reducing the size of the incision, accelerating the healing of the patient's wound, and reducing the patient's pain. During radiotherapy planning, the tumor size and location determined by this method can help doctors plan the intensity of the radiotherapy rays, reducing the impact of radiotherapy on normal tissue. During chemotherapy planning, the location, size, and shape of the tumor determined by this method can help doctors plan the drug dosage, reducing the impact on the patient's normal cells and thereby alleviating the patient's pain during treatment.
  • The tumor image processing method proposed in the present application recognizes the original image twice, first coarsely and then finely, so as to obtain the target organ and the tumor on it with high precision.
  • The organ and tumor are delineated in two dimensions and/or reconstructed in three dimensions, so that the doctor can more accurately determine their position, size, and shape from the processed target organ image, facilitating diagnosis and treatment.
  • the present invention also provides a tumor image processing device.
  • FIG. 15 is a schematic block diagram of a tumor image processing apparatus provided by an embodiment of the present invention.
  • the apparatus 10 for processing tumor images includes: an acquisition module 11, a first identification module 12, a second identification module 13 and an identification module 14.
  • the acquisition module 11 is used to acquire the original image taken for the target organ.
  • the first recognition module 12 is configured to perform rough recognition on the original image to obtain the area data of the area where the target organ is located from the original image.
  • the second recognition module 13 is configured to perform precise recognition on the area data, and obtain the first data of the target organ and the second data of the tumor on the target organ.
  • the identification module 14 is configured to perform two-dimensional delineation and three-dimensional reconstruction of the target organ and the tumor on the target organ according to the first data and the second data.
  • The first recognition module 12 is specifically configured to: input the original image into a U-shaped network structure with a three-layer step connection to obtain a first image; perform three-dimensional coordinate mapping on each position point in the first image to obtain the spatial coordinates of each position point; and determine the region data of the target organ according to the spatial coordinates.
  • The first recognition module 12 is specifically configured to: perform cluster analysis on the spatial coordinates to obtain the center-of-gravity position corresponding to the position points in the first image; obtain the first distance between each position point and the center-of-gravity position; identify the maximum value among the first distances and use the maximum value as the shear radius; and crop around the center-of-gravity position according to the shear radius, so that the cropped area is used as the region data of the target organ.
  • The first recognition module 12 is specifically configured to: obtain the redundancy amount of the shear radius, and use the sum of the maximum value and the redundancy amount as the shear radius.
  • The second recognition module 13 is specifically configured to: obtain the region image corresponding to the region data; input the region image into a 3DU network structure with a sparse connection module and a multi-level residual module; use the 3DU network structure to finely recognize the region image and obtain a second image that has been finely recognized; and analyze the second image to generate the target organ image.
  • The second recognition module 13 is specifically configured to: use the 3DU network structure to down-sample the region data to obtain a first feature map; input the first feature map into the sparse connection module to obtain a second feature map; input the second feature map into the multi-level residual module to obtain a third feature map; and up-sample the third feature map to obtain the second image.
  • the sparse connection module includes four cascaded branches.
  • the multi-level residual module adopts three-scale pooling operations.
  • the identification module 14 is further configured to: obtain a selection instruction for the target organ, wherein the selection instruction includes a selection position of the target organ; extract, from the original image according to the selection instruction, the target original image corresponding to the selected position; and perform two-dimensional delineation and three-dimensional reconstruction of the target organ and the tumor on the target organ according to the first data and the second data.
  • the identification module 14 is further configured to: according to the selection instruction, extract a target original image corresponding to the selected position and conforming to the image angle from the original image.
  • the embodiments of the present invention also provide an electronic device, including a memory, a processor, and a computer program stored on the memory and runnable on the processor; when the processor executes the program, the foregoing tumor image processing method is implemented.
  • the present invention also provides a computer-readable storage medium on which a computer program is stored, and when the program is executed by a processor, the aforementioned tumor image processing method is realized.
  • the terms “first” and “second” are used only for descriptive purposes and cannot be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Therefore, a feature defined with “first” or “second” may explicitly or implicitly include at least one such feature. In the description of the present invention, “a plurality of” means at least two, such as two or three, unless otherwise specifically defined.
  • a "computer-readable medium” can be any device that can contain, store, communicate, propagate, or transmit a program for use by an instruction execution system, device, or device or in combination with these instruction execution systems, devices, or devices.
  • specific examples of computer-readable media include the following: an electrical connection (electronic device) with one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber optic device, and a portable compact disc read-only memory (CD-ROM).
  • the computer-readable medium may even be paper or another suitable medium on which the program is printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
  • each part of the present invention can be implemented by hardware, software, firmware, or a combination thereof.
  • multiple steps or methods can be implemented by software or firmware stored in a memory and executed by a suitable instruction execution system.
  • if implemented in hardware, as in another embodiment, any one or a combination of the following techniques known in the art may be used: discrete logic circuits having logic gate circuits for implementing logic functions on data signals, application-specific integrated circuits having suitable combinational logic gates, programmable gate arrays (PGA), and field programmable gate arrays (FPGA).
  • the functional units in the various embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units may be integrated into one module.
  • the above-mentioned integrated modules can be implemented in the form of hardware or software function modules. If the integrated module is implemented in the form of a software function module and sold or used as an independent product, it may also be stored in a computer readable storage medium.
  • the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.


Abstract

A tumor image processing method and apparatus, an electronic device, and a storage medium. The method comprises: obtaining an original image by scanning a target organ (S101); performing rough recognition on the original image to obtain, from the original image, area data of the area where the target organ is located (S102); performing precise recognition on the area data to obtain first data of the target organ and second data of a tumor on the target organ (S103); and performing, according to the first data and the second data, two-dimensional delineation and/or three-dimensional reconstruction of the target organ and the tumor on the target organ (S104). High-precision visualization results of the target organ and its tumor can thus be obtained, enabling doctors to determine the position, size, and shape of the target organ and the tumor more accurately from the processed image, which facilitates diagnosis and treatment.

Description

[Title of invention established by the ISA according to Rule 37.2] Tumor image processing method and apparatus, electronic device, storage medium
Cross-reference to related applications
This application claims priority to the Chinese patent application filed with the Chinese Patent Office on May 29, 2020 with application number 202010474294.7 and entitled "Tumor Image Processing Method and Apparatus, Electronic Device, Storage Medium", the entire contents of which are incorporated herein by reference.
Technical field
The present invention relates to the technical field of medical image processing, and in particular to a tumor image processing method and apparatus, an electronic device, and a storage medium.
Background
At present, doctors mainly use CT scans for tumor examination. A single CT scan produces at least 500 MB of DICOM data, and each case contains more than 300 tomographic images that the doctor must review one by one, which takes more than 30 minutes. Once a diagnosis is confirmed, a radiologist needs to perform three-dimensional reconstruction to assist the clinician in surgical planning, which usually takes more than 60 minutes. The doctor's labor intensity is high and efficiency is low. With the development of image processing technology, image processing is gradually being applied to medical images to reduce the time doctors spend reviewing images one by one.
Summary of the invention
An embodiment of the present disclosure proposes a tumor image processing method, including the following steps: obtaining an original image of a target organ; performing rough recognition on the original image to obtain, from the original image, area data of the area where the target organ is located; performing precise recognition on the area data to obtain first data of the target organ and second data of a tumor on the target organ; and performing, according to the first data and the second data, two-dimensional delineation and/or three-dimensional reconstruction of the target organ and the tumor on the target organ.
In some embodiments, performing rough recognition on the original image to obtain the area data of the area where the target organ is located includes: inputting the original image into a U-shaped network structure with three layers of step connections to obtain a first image; performing three-dimensional coordinate mapping on each position point in the first image to obtain the spatial coordinates of each position point; and determining the area data of the target organ according to the spatial coordinates.
In some embodiments, determining the area data of the target organ according to the spatial coordinates includes: performing cluster analysis on the spatial coordinates to obtain the center-of-gravity position corresponding to the position points in the first image; obtaining the first distance between each position point and the center-of-gravity position; identifying the maximum value among the first distances and using it as the shear radius; and shearing around the center-of-gravity position according to the shear radius, so that the sheared area serves as the area data of the target organ.
In some embodiments, identifying the maximum value among the first distances and using it as the shear radius includes: obtaining a redundancy for the shear radius, and using the sum of the maximum value and the redundancy as the shear radius.
In some embodiments, performing precise recognition on the area data to obtain the first data of the target organ and the second data of the tumor on the target organ includes: obtaining the area image corresponding to the area data; inputting the area image into a 3DU-type network structure with a sparse connection module and a multi-level residual module; using the 3DU-type network structure to perform precise recognition on the area image to obtain a second image; and analyzing the second image to obtain the first data of the target organ and the second data of the tumor on the target organ.
In some embodiments, obtaining the second image precisely recognized by the 3DU-type network structure includes: using the 3DU-type network structure to down-sample the area data to obtain a first feature map; inputting the first feature map into the sparse connection module to obtain a second feature map; inputting the second feature map into the multi-level residual module to obtain a third feature map; and up-sampling the third feature map to obtain the second image.
In some embodiments, the sparse connection module includes four cascaded branches.
In some embodiments, the multi-level residual module adopts pooling operations at three scales.
In some embodiments, performing two-dimensional delineation of the target organ and the tumor on the target organ according to the first data and the second data includes: obtaining a selection instruction for the target organ, wherein the selection instruction includes a selection position of the target organ; extracting, from the original image according to the selection instruction, a target original image corresponding to the selected position; and performing two-dimensional delineation of the target organ and the tumor on the target organ according to the first data and the second data.
In some embodiments, the selection instruction further includes an image angle, and extracting the target original image corresponding to the selected position from the original image according to the selection instruction includes: extracting, from the original image according to the selection instruction, a target original image corresponding to the selected position and conforming to the image angle.
According to the tumor image processing method proposed in this application, the original image is recognized twice, first coarsely and then finely, so that high-precision data of the target organ and the tumor on it can be obtained, and the target organ and the tumor are delineated in two dimensions and/or reconstructed in three dimensions. This enables doctors to determine information such as the position, size, and shape of the target organ and the tumor more accurately from the processed annotated image, which facilitates diagnosis and treatment.
An embodiment of the present disclosure proposes a tumor image processing apparatus, including: an acquisition module configured to acquire an original image taken of a target organ; a first recognition module configured to perform rough recognition on the original image to obtain, from the original image, area data of the area where the target organ is located; a second recognition module configured to perform precise recognition on the area data to obtain first data of the target organ and second data of a tumor on the target organ; and an identification module configured to perform, according to the first data and the second data, two-dimensional delineation and/or three-dimensional reconstruction of the target organ and the tumor on the target organ.
An embodiment of the present disclosure proposes an electronic device, including a memory, a processor, and a computer program stored on the memory and runnable on the processor; when the processor executes the program, the above tumor image processing method is implemented.
An embodiment of the present disclosure proposes a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the above tumor image processing method is implemented.
Description of the drawings
The above and/or additional aspects and advantages of the present invention will become apparent and easy to understand from the following description of the embodiments in conjunction with the accompanying drawings, in which:
Fig. 1 is a flowchart of a tumor image processing method provided by an embodiment of the present invention;
Fig. 2 is a flowchart of another tumor image processing method provided by an embodiment of the present invention;
Fig. 3 is a schematic structural diagram of the simplified Unet model provided by an embodiment of the present invention;
Fig. 4 is a flowchart of another tumor image processing method provided by an embodiment of the present invention;
Fig. 5 is a schematic diagram of the effect after processing by the simplified Unet model provided by an embodiment of the present invention;
Fig. 6 is a flowchart of yet another tumor image processing method provided by an embodiment of the present invention;
Fig. 7 is a schematic structural diagram of the 3DU-type network structure with a sparse connection module and a multi-level residual module provided by an embodiment of the present invention;
Fig. 8 is a flowchart of still another tumor image processing method provided by an embodiment of the present invention;
Fig. 9 is a schematic structural diagram of the sparse connection module provided by an embodiment of the present invention;
Fig. 10 is a schematic structural diagram of the multi-level residual module provided by an embodiment of the present invention;
Fig. 11 is a schematic diagram of the principle of the training sample set according to an embodiment of the present invention;
Fig. 12 is a flowchart of still another tumor image processing method provided by an embodiment of the present invention;
Fig. 13 is a schematic diagram of a specific two-dimensional delineation of a target organ and the tumor on it according to an embodiment of the present invention;
Fig. 14 is a schematic diagram of another specific two-dimensional delineation and three-dimensional reconstruction of a target organ and the tumor on it according to an embodiment of the present invention;
Fig. 15 is a schematic block diagram of a tumor image processing apparatus provided by an embodiment of the present invention.
Detailed description of the embodiments
The embodiments of the present invention are described in detail below. Examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals throughout denote the same or similar elements or elements with the same or similar functions. The embodiments described below with reference to the drawings are exemplary and are intended to explain the present invention; they should not be construed as limiting the present invention.
The tumor image processing method and apparatus, electronic device, and storage medium according to the embodiments of the present invention are described below with reference to the accompanying drawings.
Fig. 1 is a schematic flowchart of a tumor image processing method provided by an embodiment of the present invention. As shown in Fig. 1, the tumor image processing method of the embodiment of the present invention includes the following steps:
S101: Obtain an original image of the target organ.
The original image is, for example, a CT (Computed Tomography) image taken of the patient's target organ. A single CT scan produces at least 500 MB of DICOM (Digital Imaging and Communications in Medicine) data, with more than about 300 tomographic images per case.
S102: Perform rough recognition on the original image to obtain, from the original image, area data of the area where the target organ is located.
S103: Perform precise recognition on the area data to obtain first data of the target organ and second data of a tumor on the target organ.
The first data may include information such as the position and size of the target organ, and the second data may include information such as the position, size, and shape of the tumor.
S104: Perform two-dimensional delineation and/or three-dimensional reconstruction of the target organ and the tumor on the target organ according to the first data and the second data.
Thus, the tumor image processing method proposed in this application recognizes the original image twice, first coarsely and then finely, so that high-precision data of the target organ and the tumor on it can be obtained, and the target organ and the tumor are delineated in two dimensions and/or reconstructed in three dimensions. Doctors can thereby determine information such as the position, size, and shape of the target organ and the tumor more accurately from the processed annotated image, which facilitates diagnosis and treatment.
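Steps S101-S104 form a coarse-to-fine pipeline. As a purely illustrative control-flow sketch (the stage functions and label conventions here are placeholders, not part of the patent), the method can be written as:

```python
def process_tumor_image(scan_volume, coarse_model, fine_model):
    """Coarse-to-fine pipeline mirroring steps S101-S104 (illustrative only)."""
    # S102: rough recognition narrows the scan down to the organ region
    region = coarse_model(scan_volume)
    # S103: precise recognition yields organ (first data) and tumor (second data)
    organ_mask, tumor_mask = fine_model(region)
    # S104: downstream delineation/reconstruction consumes both masks
    return {"organ": organ_mask, "tumor": tumor_mask}

# Toy stand-ins for the two models: identity "crop" and label-matching "recognition"
result = process_tumor_image(
    scan_volume=[[0, 1], [1, 2]],
    coarse_model=lambda v: v,
    fine_model=lambda r: ([[c == 1 for c in row] for row in r],
                          [[c == 2 for c in row] for row in r]),
)
print(result["tumor"])  # [[False, False], [False, True]]
```

The real models described later (the simplified Unet and the SP-3DUnet) would take the place of the two lambda stand-ins.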
According to another embodiment of the invention, as shown in Fig. 2, performing rough recognition on the original image to obtain the area data of the area where the target organ is located includes:
S201: Input the original image into a U-shaped network structure with three layers of step connections to obtain a first image.
It should be noted that the U-shaped network structure in the examples of this application is, for example, a simplified Unet model. The Unet model has a simple structure and tightly combines deep and shallow semantics. This application optimizes the Unet model: as shown in Fig. 3, the fourth-layer connection in the traditional Unet structure is removed and the model is adjusted to three layers of step connections, which reduces the number of parameters by half and effectively improves operating efficiency.
It should also be noted that recognizing the whole CT scan image directly produces large errors. In kidney tumor recognition in particular, a large amount of background space remains outside the two kidneys, which not only increases the computation required for image processing but also affects the accuracy of the recognition result. Therefore, this application further processes the image obtained through the U-shaped network structure to determine the area data of the region where the target organ is located; that is, the input to the subsequent precise recognition contains only the region where the target organ is located, with irrelevant information cut away, effectively reducing the amount of data to be processed in precise recognition. Taking the kidney as an example, the input data for precise recognition may be area data containing only the left and right kidneys.
S202: Perform three-dimensional coordinate mapping on each position point in the first image to obtain the spatial coordinates of each position point.
S203: Determine the area data of the target organ according to the spatial coordinates.
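The coordinate mapping in S202 is not spelled out in the text. A common way to map the voxel indices of a CT volume to physical space uses the volume's origin and voxel spacing, as recorded in the DICOM header; the sketch below is a minimal illustration under that assumption (the argument names and example values are hypothetical, not taken from the patent):

```python
import numpy as np

def voxel_to_world(indices, origin, spacing):
    """Map integer voxel indices (z, y, x) to physical-space coordinates.

    indices: (N, 3) array of voxel indices
    origin:  (3,) physical coordinate of voxel (0, 0, 0)
    spacing: (3,) physical size of one voxel along each axis
    """
    return np.asarray(origin) + np.asarray(indices) * np.asarray(spacing)

# Example: 2 mm slices, 0.8 mm in-plane resolution
coords = voxel_to_world([[0, 0, 0], [10, 100, 100]],
                        origin=[0.0, -200.0, -200.0],
                        spacing=[2.0, 0.8, 0.8])
print(coords)
```

A full DICOM affine also carries slice orientation; for axis-aligned volumes the scale-and-shift above is the essential part.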
Specifically, as shown in Fig. 4, determining the area data of the target organ according to the spatial coordinates includes:
S301: Perform cluster analysis on the spatial coordinates to obtain the center-of-gravity position corresponding to the position points in the first image.
S302: Obtain the first distance between each position point and the center-of-gravity position.
S303: Identify the maximum value among the first distances and use it as the shear radius.
S304: Shear around the center-of-gravity position according to the shear radius, so that the sheared area serves as the area data of the target organ.
It should be understood that when the target organ is the kidney, since a person has two kidneys, there may be two center-of-gravity positions; that is, after cluster analysis, two center-of-gravity positions can be found according to the clustering result, and the area data of the two target organs can be cut out accordingly. The clustering effect can be as shown in Fig. 5.
Further, in order to avoid shearing errors, an elastic scale, that is, a redundancy for the shear radius, can also be added, where the redundancy is determined from the size of the original image, the shear size, and the distance from the center-of-gravity position to the point in question.
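As an illustrative sketch of S301-S304 for a single organ (NumPy-based; the single-cluster centroid and the fixed margin are simplifying assumptions — the patent uses cluster analysis for the two kidneys and leaves the redundancy formula unspecified):

```python
import numpy as np

def organ_crop_radius(coords, margin=5.0):
    """Compute the center of gravity and shear radius for one organ cluster.

    coords: (N, 3) spatial coordinates of voxels attributed to the organ
    margin: hypothetical redundancy added to the maximum distance (S303)
    """
    coords = np.asarray(coords, dtype=float)
    centroid = coords.mean(axis=0)                     # S301, one-cluster case
    dists = np.linalg.norm(coords - centroid, axis=1)  # S302
    radius = dists.max() + margin                      # S303 plus redundancy
    return centroid, radius

centroid, radius = organ_crop_radius([[0, 0, 0], [4, 0, 0], [2, 2, 0]],
                                     margin=1.0)
print(centroid, radius)
```

Cropping (S304) would then keep only the voxels within `radius` of `centroid`; with two kidneys, the same computation is run once per cluster found in S301.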
It should be understood that, since the target organ portion (for example, the kidney) in a case of CT data accounts for only about 10% of the overall data, adding a large amount of irrelevant information would cause the model to overfit and make kidney and tumor recognition more difficult. Therefore, cutting away irrelevant information through the rough recognition proposed in this application improves data purity, so that the trained model recognizes better.
Thus, this application can extract the area data of the target organ through rough recognition, reducing the amount of data to be processed in the subsequent precise recognition process and effectively improving the accuracy and speed of image processing.
According to yet another embodiment of this application, as shown in Fig. 6, performing precise recognition on the area data to generate a target organ image containing tumor data includes:
S401: Obtain the area image corresponding to the area data.
The area data may be a data block containing lesion and tumor information obtained through rough recognition. Since the 3DU-type network is an image processing network, the data block needs to be converted into images for precise recognition.
S402: Input the area image into a 3DU-type network structure with a sparse connection module and a multi-level residual module.
It should be noted that the 3DU-type network structure with a sparse connection module (S module) and a multi-level residual module (P module) proposed in this application can be expressed as the SP-3DUnet model; that is, a sparse connection module (S module) and a multi-level residual module (P module) are added to the 3DUnet model. Specifically, this application adds the two modules to the bottom layer of the 3DUnet model, that is, after the down-sampling and before the up-sampling of the 3DUnet model.
Further, this application also optimizes the 3DUnet model: each convolutional layer drops one convolution operation relative to the original model, reducing the number of parameters by half and shrinking the model. However, this operation correspondingly weakens the model's ability to extract deep semantics from the image. Therefore, this application further adds the sparse connection module (S module) and the multi-level residual module (P module) at the bottom layer of the optimized 3DU-type model, as shown in Fig. 7, to counteract the loss of semantic expression ability. This improves recognition accuracy and operating efficiency while increasing the number of parameters by only 1/5.
S403: Use the 3DU-type network structure to perform precise recognition on the area image, and obtain a second image precisely recognized by the 3DU-type network structure.
As a specific embodiment, as shown in Figs. 8-10, obtaining the second image precisely recognized by the 3DU-type network structure in step S403 includes:
S4031: Down-sample the area data using the 3DU-type network structure to obtain a first feature map.
S4032: Input the first feature map into the sparse connection module to obtain a second feature map.
It should be noted that this application proposes sparse connections based on dilated (atrous) convolution to encode high-level semantic feature maps, where the dilated convolutions are stacked in a cascaded manner. Specifically, in the embodiments of this application, the sparse connection module is divided into four cascaded branches.
As a specific embodiment, as shown in Fig. 9, due to the requirements of the subsequent three-dimensional model on the output data and the limits of the actual GPU (Graphics Processing Unit) hardware performance in this application, two kinds of dilated convolutions, 1*1*1 and 3*3*3, can be used, and the receptive fields corresponding to the branches are 3*3*3, 7*7*7, and 9*9*9.
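The receptive fields quoted above can be checked with the standard formula for stacked stride-1 convolutions, rf = 1 + sum((k - 1) * d) over the layers, for kernel size k and dilation d. The layer stacks below are an assumption chosen to reproduce the quoted 3/7/9 fields, not a structure given in the patent:

```python
def receptive_field(layers):
    """Receptive field along one axis of stacked stride-1 convolutions.

    layers: list of (kernel_size, dilation) pairs.
    """
    rf = 1
    for kernel, dilation in layers:
        rf += (kernel - 1) * dilation
    return rf

# One plausible per-branch assignment matching the quoted fields:
print(receptive_field([(3, 1)]))                  # 3
print(receptive_field([(3, 1), (3, 2)]))          # 7
print(receptive_field([(3, 1), (3, 2), (3, 1)]))  # 9
```

The 1*1*1 convolutions mentioned in the text do not enlarge the receptive field (k - 1 = 0); they only mix channels.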
应当理解的是,在使用稀疏连接模块时,大感受域的卷积可以为大目标提取和生成更抽象的特征,而小感受域的卷积可以对小目标提取和生成更抽象的特征。本申请通过组合不同扩张率的空洞卷积,稀疏连接模块能够提取具有各种尺寸目标的特征,建立起底层关键语义特征的稀疏连接方式,抵消由于减少卷积操作导致的语义信息描述损失。It should be understood that when the sparse connection module is used, the convolution of the large receptive field can extract and generate more abstract features for the large target, and the convolution of the small receptive field can extract and generate more abstract features for the small target. In this application, by combining hole convolutions with different expansion rates, the sparse connection module can extract features with various size targets, establish a sparse connection mode of underlying key semantic features, and offset the semantic information description loss caused by reducing the convolution operation.
S4033:将第二特征图输入多级残差模块,以获取第三特征图。S4033: Input the second feature map to the multi-level residual module to obtain a third feature map.
需要说明的是,由于目标器官上肿瘤大小不一,晚期患者肿瘤大小可能会超过目标器官体积2/3,当肿瘤体积大于或者小于目前数据集涵盖的大小,可能会造成算法性能的下降,因此,本申请提出采用一个多级残差模块来增强算法对多尺寸目标的识别泛化能力。多级残 差模块采用不同尺度的池化操作可提取不同尺度的特征信息,提升对不同大小的目标的识别能力。It should be noted that due to the different tumor sizes on the target organs, the tumor size in advanced patients may exceed 2/3 of the target organ volume. When the tumor volume is larger or smaller than the size covered by the current data set, the performance of the algorithm may decrease, so , This application proposes to use a multi-level residual module to enhance the algorithm's ability to recognize and generalize multi-size targets. The multi-level residual module uses pooling operations of different scales to extract feature information of different scales and improve the ability to recognize targets of different sizes.
As a specific embodiment, as shown in FIG. 10, pooling operations at three scales are adopted, and the three branches output feature maps at three scales. To reduce the weight dimensionality and the computational cost, a 1*1*1 convolution may also be applied after each pooling branch; it reduces the size of the feature map to 1/N of the original, where N is the number of channels in the original feature map.
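A 1*1*1 convolution is simply a per-voxel linear mixing of channels, so it changes the channel count without touching the spatial size. A minimal numpy sketch (shapes and weights are illustrative assumptions):

```python
import numpy as np

def conv1x1x1(x, w):
    # Point-wise 3D convolution: every output voxel is a linear mix of input channels.
    # x: (C_in, D, H, W) feature volume; w: (C_out, C_in) weights.
    return np.einsum('oc,cdhw->odhw', w, x)

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 4, 4, 4))   # 8-channel feature volume
w = rng.standard_normal((1, 8))            # collapse 8 channels to 1 (1/N with N = 8)
out = conv1x1x1(feat, w)                   # shape (1, 4, 4, 4): spatial size unchanged
```

Because no spatial neighborhood is involved, the cost is linear in the number of voxels, which is what makes this reduction cheap after each pooling branch.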
S4034: Up-sample the third feature map to obtain a second image.
Specifically, the third feature map, reduced to 1/N of the original size, is restored to the same size as the original feature map by bilinear interpolation.
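The interpolation step can be sketched in one dimension; trilinear up-sampling of a volume applies the same linear interpolation independently along each of the D, H, and W axes. The helper below is an illustrative sketch, not the patent's implementation:

```python
import numpy as np

def linear_upsample(signal, factor):
    # 1-D linear interpolation to 'factor' times as many samples.
    n = len(signal)
    old = np.arange(n)
    new = np.linspace(0, n - 1, n * factor)
    return np.interp(new, old, signal)

up = linear_upsample(np.array([0.0, 1.0, 2.0]), 2)
# endpoints are preserved; intermediate samples lie on the connecting lines
```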
S404: Parse the second image to obtain first data containing the target organ and second data of the tumor on the target organ.
That is to say, the image obtained after fine recognition by the 3D U-shaped network structure with the sparse connection module and the multi-level residual module is a feature image, which needs to be parsed to obtain the first data containing the target organ and the second data of the tumor on the target organ. For example, positions belonging to the target organ can be labeled 1, positions of the tumor on the target organ labeled 2, and all other positions labeled 0. The set of positions labeled as the target organ then forms the first data, which expresses the position and size of the target organ; likewise, the set of positions of the tumor on the target organ forms the second data.
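The 0/1/2 label coding described above can be parsed into the two position sets with plain array indexing. A toy sketch (the volume and its contents are made up for illustration):

```python
import numpy as np

# Toy 3-D label volume using the coding from the text:
# 0 = background, 1 = target organ, 2 = tumor on the organ.
labels = np.zeros((4, 4, 4), dtype=int)
labels[1:3, 1:3, 1:3] = 1   # organ voxels
labels[2, 2, 2] = 2         # one tumor voxel inside the organ

first_data = np.argwhere(labels == 1)    # positions of the target organ
second_data = np.argwhere(labels == 2)   # positions of the tumor on the organ
```

The two coordinate lists directly carry the position and extent information that the later delineation and reconstruction steps consume.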
It should be noted that both the simplified U-shaped network structure used in the above rough recognition and the 3D U-shaped network structure with the sparse connection module and multi-level residual module used in the fine recognition need to be trained by deep learning, so that in use they can accurately identify the first data of the target organ and the second data of the tumor on the target organ.
Further, during training, multiple original images in which the target organ and tumor positions have been manually annotated can be input as the training sample set. To make the training results more accurate, the manually annotated training sample set can also be expanded by flipping, translation, rotation, deformation, and the like.
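The sample-set expansion named above (flipping, translation, rotation) maps to simple array operations; the same transform must be applied to an image and to its annotation so the labels stay aligned. A minimal illustrative sketch:

```python
import numpy as np

def augment(volume):
    # Simple expansion of one annotated 3-D volume: flip, translate, rotate.
    return [
        np.flip(volume, axis=0),             # inversion along one axis
        np.roll(volume, shift=1, axis=1),    # translation by one voxel
        np.rot90(volume, k=1, axes=(1, 2)),  # 90-degree rotation in one plane
    ]

vols = augment(np.arange(8).reshape(2, 2, 2))
```

Elastic deformation, also mentioned in the text, needs a dense displacement field and is omitted from this sketch.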
This application uses a Dice-coefficient loss function, where the Dice loss takes the weighted soft-Dice form:
Ldice = 1 - 2 * Σ_k w_k Σ_i p(k,i) g(k,i) / ( Σ_k w_k Σ_i ( p(k,i)^2 + g(k,i)^2 ) )
Here, N is the number of pixels, i indexes the pixels, p(k,i) ∈ [0,1] and g(k,i) ∈ {0,1} respectively denote the predicted probability and the ground-truth label of class k at pixel i, p(k,i) being the predicted probability of each pixel in the parsed target organ image, k is the class, and w_k is the class weight, which can be set in the embodiments of this application.
The loss function using the Dice coefficient can be: Lloss = Ldice + Lreg, where Lreg denotes a regularization loss term (also known as weight decay) used to avoid overfitting.
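The Ldice term can be sketched directly from the symbols defined above. The exact weighting in the patent's formula image is not reproduced here; this sketch uses a common weighted soft-Dice form consistent with those symbols:

```python
import numpy as np

def dice_loss(p, g, w, eps=1e-6):
    # Weighted soft Dice loss over K classes and N pixels.
    # p: (K, N) predicted probabilities; g: (K, N) one-hot labels; w: (K,) class weights.
    inter = (p * g).sum(axis=1)                       # per-class overlap
    denom = (p * p).sum(axis=1) + (g * g).sum(axis=1) # per-class normalizer
    dice = (2.0 * w * inter).sum() / ((w * denom).sum() + eps)
    return 1.0 - dice
```

In training, the full objective would add a regularization term, e.g. Lloss = dice_loss(...) + lam * sum of squared weights, matching Lreg above.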
It should also be understood that, when deep learning is performed on both the simplified U-shaped network structure and the 3D U-shaped network structure with the sparse connection module and multi-level residual module, the accuracy of the image recognition can also be verified by a doctor's manual review. If a recognition error is found, it is corrected with a brush tool, and the correction replaces the original recognition result; for example, the correction result can be automatically sent back to a cloud training database to update the training sample set, so that the model can be retrained, as shown in FIG. 11.
As another feasible embodiment, as shown in FIG. 12, performing two-dimensional delineation of the target organ and the tumor on the target organ according to the first data and the second data includes:
S501: Obtain a selection instruction for the target organ.
The selection instruction contains a selected position on the target organ.
S502: According to the selection instruction, extract a target original image corresponding to the selected position from the original image.
In some embodiments, the selection instruction may further include an image angle; in that case, a target original image that corresponds to the selected position and conforms to the image angle needs to be extracted from the original image.
S503: According to the first data and the second data, perform two-dimensional delineation of the target organ and the tumor on the target organ on the original image.
That is to say, after the original image of the target organ is recognized, the target organ and the tumor on it can be delineated in two dimensions on the original image according to the first data and the second data, so that doctors and patients can clearly understand the lesion. At the same time, this can help a doctor making a medical diagnosis from CT images focus on the images that contain the kidney, effectively improving diagnostic efficiency, achieving precise diagnosis and treatment, and preventing missed diagnoses. Further, by three-dimensionally reconstructing the target organ and the tumor on it, this application can effectively improve visualization in doctor-patient communication.
Specifically, a selection instruction for the target organ can be obtained, and the position and angle selected by the doctor and/or patient extracted from it; the tomographic image corresponding to that position and angle is then extracted from the original image, and it is judged whether each position point in the image belongs to the first data and/or the second data. If so, the point is marked according to the first data and/or the second data; if not, no operation is performed.
For example, as shown in FIG. 13, the left image is a tomographic image of the kidney scanned from a top-view angle with the human body upright; specifically, it is the image corresponding to the 286th of 520 tomographic slices. The right image is the left image after the kidney and the kidney tumor have been marked. Further, the image angle may also include a side-view angle and a rear-view angle with the body upright, as shown in FIG. 14, where the upper-left image in FIG. 14 is a tomographic image from the side-view angle with the body upright, the upper-right image is a tomographic image from the top-view angle, the lower-right image is a tomographic image from the rear-view angle, and the lower-left image is a stereoscopic view of the three-dimensional model of the target organ and the tumor on it.
Alternatively, after the first data and the second data are obtained by fine recognition, the target organ and the tumor on it can be three-dimensionally modeled according to the first data and the second data to obtain a three-dimensional model of the target organ and the tumor, as shown in FIG. 14; then, according to the selection instruction, the corresponding marking data is extracted from the three-dimensional model and two-dimensional delineation is performed on the original image.
It should be understood from the lower-left view in FIG. 14 that the embodiments of this application can also three-dimensionally reconstruct the target organ and the tumor on it according to the first data and the second data, and directly display the reconstructed stereoscopic image.
It should also be understood that, when displaying on a display terminal, the multiple images in FIG. 13 and FIG. 14 can be combined, so that doctors and/or patients can conveniently see the correspondence among simultaneously displayed images, that is, the shape and size of the same target organ position and/or the same tumor position at different angles.
Further, the three-dimensional reconstruction model of the identified target organ and tumor region allows the doctor to accurately obtain the position and shape of the tumor on the target organ from the model, which makes it convenient for the doctor to carry out medical planning based on the three-dimensional reconstruction model, such as surgical planning, radiotherapy planning, and chemotherapy planning.
For example, during surgical planning, the tumor position determined by the method proposed in this application can help the doctor locate the tumor precisely, shortening the operation time and improving its quality. Because the positioning is accurate, misplacement of the incision can also be avoided, so the incision can be smaller, the patient's wound can heal faster, and the patient's pain is reduced. During radiotherapy planning, the tumor size and position determined by the method can help the doctor plan the radiation intensity, reducing the impact of radiotherapy on normal tissue. During chemotherapy planning, the tumor position, size, and shape determined by the method can help the doctor plan the drug dosage, reducing the impact on the patient's normal cells and thereby the suffering caused by treatment.
In summary, the tumor image processing method proposed in this application performs two recognitions of the original image, first coarse and then fine, so that the target organ and the tumor on it can be obtained with high precision. Two-dimensional delineation and/or three-dimensional reconstruction of the target organ and its tumor then enables the doctor to determine the position, size, and shape of the target organ more accurately from the processed image, facilitating diagnosis and treatment.
To implement the above embodiments, the present invention further provides a tumor image processing apparatus.
FIG. 15 is a schematic block diagram of a tumor image processing apparatus provided by an embodiment of the present invention. As shown in FIG. 15, the tumor image processing apparatus 10 includes: an acquisition module 11, a first recognition module 12, a second recognition module 13, and a marking module 14.
The acquisition module 11 is configured to acquire an original image captured of a target organ.
The first recognition module 12 is configured to perform rough recognition on the original image to obtain, from the original image, region data of the region where the target organ is located.
The second recognition module 13 is configured to perform fine recognition on the region data to obtain first data of the target organ and second data of the tumor on the target organ.
The marking module 14 is configured to perform two-dimensional delineation and three-dimensional reconstruction of the target organ and the tumor on the target organ according to the first data and the second data.
Further, the first recognition module 12 is specifically configured to: input the original image into a U-shaped network structure with three layers of skip connections to obtain a first image; perform three-dimensional coordinate mapping on each position point in the first image to obtain the spatial coordinates of each position point; and determine the region data of the target organ according to the spatial coordinates.
Further, the first recognition module 12 is specifically configured to: perform cluster analysis on the spatial coordinates to obtain the centroid position corresponding to the position points in the first image; obtain the first distances between the position points and the centroid position; identify the maximum value among the first distances and use it as a cropping radius; and crop around the centroid position according to the cropping radius, so that the cropped region serves as the region data of the target organ.
Further, the first recognition module 12 is specifically configured to: obtain a redundancy margin for the cropping radius, and use the sum of the maximum value and the redundancy margin as the cropping radius.
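The centroid-plus-radius crop of the coarse stage can be sketched in a few lines; the point set and margin value below are illustrative assumptions:

```python
import numpy as np

def crop_radius(points, margin=0.0):
    # Coarse-stage crop: the centroid of the candidate organ voxels, and a cropping
    # radius equal to the largest centroid distance plus a redundancy margin.
    centroid = points.mean(axis=0)
    dists = np.linalg.norm(points - centroid, axis=1)
    return centroid, dists.max() + margin

pts = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
center, radius = crop_radius(pts, margin=0.5)
```

Every voxel within `radius` of `center` is kept as the region data passed to the fine-recognition stage; the margin guards against the coarse mask slightly under-covering the organ.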
Further, the second recognition module 13 is specifically configured to: obtain the region image corresponding to the region data; input the region image into a 3D U-shaped network structure with a sparse connection module and a multi-level residual module; perform fine recognition on the region image using the 3D U-shaped network structure to obtain a second image finely recognized by the 3D U-shaped network structure; and parse the second image to generate the target organ image.
Further, the second recognition module 13 is specifically configured to: up-sample the region data using the 3D U-shaped network structure to obtain a first feature map; input the first feature map into the sparse connection module to obtain a second feature map; input the second feature map into the multi-level residual module to obtain a third feature map; and up-sample the third feature map to obtain the second image.
Further, the sparse connection module includes four cascaded branches.
Further, the multi-level residual module adopts pooling operations at three scales.
Further, the marking module 14 is further configured to: obtain a selection instruction for the target organ, where the selection instruction contains a selected position on the target organ; extract, according to the selection instruction, a target original image corresponding to the selected position from the original image; and perform two-dimensional delineation and three-dimensional reconstruction of the target organ and the tumor on the target organ according to the first data and the second data.
Further, the marking module 14 is further configured to: extract, according to the selection instruction, a target original image corresponding to the selected position and conforming to the image angle from the original image.
It should be noted that the foregoing explanation of the embodiments of the tumor image processing method also applies to the tumor image processing apparatus of this embodiment, and will not be repeated here.
Based on the above embodiments, an embodiment of the present invention further provides an electronic device, including a memory, a processor, and a computer program stored in the memory and runnable on the processor, where the processor, when executing the program, implements the aforementioned tumor image processing method.
To implement the above embodiments, the present invention further provides a computer-readable storage medium on which a computer program is stored, and when the program is executed by a processor, the aforementioned tumor image processing method is implemented.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "example", "specific example", or "some examples" means that a specific feature, structure, material, or characteristic described in connection with that embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described may be combined in a suitable manner in any one or more embodiments or examples. In addition, without contradiction, those skilled in the art may combine the different embodiments or examples described in this specification and the features thereof.
In addition, the terms "first" and "second" are used for descriptive purposes only and cannot be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features. Thus, a feature qualified by "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality of" means at least two, for example two or three, unless otherwise specifically defined.
Any process or method description in a flowchart or otherwise described herein can be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing the steps of a custom logic function or process, and the scope of the preferred embodiments of the present invention includes additional implementations in which functions may be performed out of the order shown or discussed, including substantially simultaneously or in reverse order depending on the functions involved, which should be understood by those skilled in the art to which the embodiments of the present invention belong.
The logic and/or steps represented in a flowchart or otherwise described herein, for example, can be considered a sequenced list of executable instructions for implementing logical functions, and can be embodied in any computer-readable medium for use by, or in combination with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or another system that can fetch instructions from an instruction execution system, apparatus, or device and execute them). For the purposes of this specification, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transmit a program for use by, or in combination with, an instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of computer-readable media include: an electrical connection (electronic device) with one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber-optic device, and a portable compact disc read-only memory (CD-ROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that the parts of the present invention can be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods can be implemented with software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or a combination of the following technologies known in the art can be used: a discrete logic circuit with logic gates for implementing logic functions on data signals, an application-specific integrated circuit with suitable combinational logic gates, a programmable gate array (PGA), a field-programmable gate array (FPGA), and so on.
Those of ordinary skill in the art can understand that all or part of the steps of the methods of the above embodiments can be completed by instructing the relevant hardware through a program, which can be stored in a computer-readable storage medium and which, when executed, includes one of the steps of the method embodiments or a combination thereof.
In addition, the functional units in the embodiments of the present invention can be integrated in one processing module, each unit can exist alone physically, or two or more units can be integrated in one module. The above integrated module can be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it can also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like. Although the embodiments of the present invention have been shown and described above, it can be understood that the above embodiments are exemplary and should not be construed as limiting the present invention; those of ordinary skill in the art can make changes, modifications, substitutions, and variations to the above embodiments within the scope of the present invention.

Claims (13)

  1. A tumor image processing method, characterized by comprising the following steps:
    obtaining an original image of a target organ;
    performing rough recognition on the original image to obtain, from the original image, region data of a region where the target organ is located;
    performing fine recognition on the region data to obtain first data of the target organ and second data of a tumor on the target organ; and
    performing two-dimensional delineation and/or three-dimensional reconstruction of the target organ and the tumor on the target organ according to the first data and the second data.
  2. The tumor image processing method according to claim 1, wherein the performing rough recognition on the original image to obtain, from the original image, the region data of the region where the target organ is located comprises:
    inputting the original image into a U-shaped network structure with three layers of skip connections to obtain a first image;
    performing three-dimensional coordinate mapping on each position point in the first image to obtain spatial coordinates of each position point; and
    determining the region data of the target organ according to the spatial coordinates.
  3. The tumor image processing method according to claim 2, wherein the determining the region data of the target organ according to the spatial coordinates comprises:
    performing cluster analysis on the spatial coordinates to obtain a centroid position corresponding to the position points in the first image;
    obtaining first distances between the position points and the centroid position;
    identifying a maximum value among the first distances and using the maximum value as a cropping radius; and
    cropping around the centroid position according to the cropping radius, so that the cropped region serves as the region data of the target organ.
  4. The tumor image processing method according to claim 3, wherein the identifying the maximum value among the first distances and using the maximum value as the cropping radius comprises:
    obtaining a redundancy margin for the cropping radius, and using the sum of the maximum value and the redundancy margin as the cropping radius.
  5. The tumor image processing method according to claim 1, wherein the performing fine recognition on the region data to obtain the first data of the target organ and the second data of the tumor on the target organ comprises:
    obtaining a region image corresponding to the region data;
    inputting the region image into a 3D U-shaped network structure with a sparse connection module and a multi-level residual module;
    performing fine recognition on the region image using the 3D U-shaped network structure to obtain a second image finely recognized by the 3D U-shaped network structure; and
    parsing the second image to obtain the first data containing the target organ and the second data of the tumor on the target organ.
  6. The tumor image processing method according to claim 5, wherein the performing fine recognition on the region image using the 3D U-shaped network structure to obtain the second image finely recognized by the 3D U-shaped network structure comprises:
    up-sampling the region data using the 3D U-shaped network structure to obtain a first feature map;
    inputting the first feature map into the sparse connection module to obtain a second feature map;
    inputting the second feature map into the multi-level residual module to obtain a third feature map; and
    up-sampling the third feature map to obtain the second image.
  7. The tumor image processing method according to claim 5 or 6, wherein the sparse connection module comprises four cascaded branches.
  8. The tumor image processing method according to claim 5 or 6, wherein the multi-level residual module adopts pooling operations at three scales.
  9. The tumor image processing method according to claim 1, wherein the performing two-dimensional delineation of the target organ and the tumor on the target organ according to the first data and the second data comprises:
    obtaining a selection instruction for the target organ, wherein the selection instruction contains a selected position on the target organ;
    extracting, according to the selection instruction, a target original image corresponding to the selected position from the original image; and
    performing two-dimensional delineation of the target organ and the tumor on the target organ on the original image according to the first data and the second data.
  10. 根据权利要求9所述的肿瘤图像的处理方法,其特征在于,所述选取指令还包括图像角度,所述根据所述选择指令,从所述原始图像中提取与所述选取位置对应的目标原始图像,包括:The method for processing a tumor image according to claim 9, wherein the selection instruction further includes an image angle, and the target original image corresponding to the selected position is extracted from the original image according to the selection instruction. Images, including:
    根据所述选择指令,从所述原始图像中提取与所述选取位置对应且符合所述图像角度的目标原始图像。According to the selection instruction, a target original image corresponding to the selected position and conforming to the image angle is extracted from the original image.
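Claims 9 and 10 select a target original image by position and image angle. A minimal sketch of that slice extraction, assuming a (depth, height, width) scan volume and hypothetical angle names ("axial", "coronal", "sagittal") that the claims themselves do not specify:

```python
import numpy as np

def extract_target_slice(volume, position, angle="axial"):
    """Extract the 2-D target original image at `position` along the
    view given by `angle` from a (D, H, W) scan volume."""
    if angle == "axial":      # slice across the depth axis
        return volume[position, :, :]
    if angle == "coronal":    # slice across the height axis
        return volume[:, position, :]
    if angle == "sagittal":   # slice across the width axis
        return volume[:, :, position]
    raise ValueError(f"unknown image angle: {angle}")
```

The same selected position thus yields a different 2-D image for each image angle, which is why claim 10 adds the angle to the selection instruction.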
  11. A tumor image processing apparatus, comprising:
    an acquisition module configured to acquire an original image scanned for a target organ;
    a first recognition module configured to perform coarse recognition on the original image to obtain, from the original image, region data of the region where the target organ is located;
    a second recognition module configured to perform fine recognition on the region data to obtain first data of the target organ and second data of the tumor on the target organ;
    an identification module configured to perform, according to the first data and the second data, two-dimensional delineation and three-dimensional reconstruction of the target organ and the tumor on the target organ.
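The modules of claim 11 can be wired together as a toy pipeline. The class below is a hypothetical stand-in: simple intensity thresholding replaces the coarse and fine recognition networks, and delineation returns mask contours; none of the thresholds or method names come from the patent.

```python
import numpy as np

class TumorImagePipeline:
    """Illustrative stand-in for the claimed acquisition, coarse
    recognition, fine recognition, and delineation modules."""

    def acquire(self, scan):
        # Acquisition module: take in the scanned original image.
        return np.asarray(scan, dtype=float)

    def coarse_recognition(self, image, organ_thresh=0.2):
        # Coarse recognition: bounding box of the organ region.
        ys, xs = np.nonzero(image > organ_thresh)
        y0, y1 = ys.min(), ys.max() + 1
        x0, x1 = xs.min(), xs.max() + 1
        return image[y0:y1, x0:x1], (y0, x0)

    def fine_recognition(self, region, tumor_thresh=0.8):
        # Fine recognition: organ voxels (first data) and the
        # brighter tumor voxels (second data) within the region.
        organ_mask = region > 0.2
        tumor_mask = region > tumor_thresh
        return organ_mask, tumor_mask

    def delineate(self, organ_mask, tumor_mask):
        # 2-D delineation: contour = mask minus its eroded interior.
        def contour(m):
            eroded = np.zeros_like(m)
            eroded[1:-1, 1:-1] = (m[1:-1, 1:-1]
                                  & m[:-2, 1:-1] & m[2:, 1:-1]
                                  & m[1:-1, :-2] & m[1:-1, 2:])
            return m & ~eroded
        return contour(organ_mask), contour(tumor_mask)
```

Each method maps onto one claimed module; a real apparatus would substitute the 3D U-shaped network for the thresholds and add the three-dimensional reconstruction step.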
  12. An electronic device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the tumor image processing method according to any one of claims 1-10.
  13. A computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the tumor image processing method according to any one of claims 1-10.
PCT/CN2021/086139 2020-05-29 2021-04-09 Tumor image processing method and apparatus, electronic device, and storage medium WO2021238438A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010474294.7A CN111640100B (en) 2020-05-29 2020-05-29 Tumor image processing method and device, electronic equipment and storage medium
CN202010474294.7 2020-05-29

Publications (1)

Publication Number Publication Date
WO2021238438A1 true WO2021238438A1 (en) 2021-12-02

Family

ID=72331191

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/086139 WO2021238438A1 (en) 2020-05-29 2021-04-09 Tumor image processing method and apparatus, electronic device, and storage medium

Country Status (2)

Country Link
CN (1) CN111640100B (en)
WO (1) WO2021238438A1 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115100185A (en) * 2022-07-22 2022-09-23 深圳市联影高端医疗装备创新研究院 Image processing method, image processing device, computer equipment and storage medium
CN115482463A (en) * 2022-09-01 2022-12-16 北京低碳清洁能源研究院 Method and system for identifying land cover of mine area of generated confrontation network
CN115908363A (en) * 2022-12-07 2023-04-04 赛维森(广州)医疗科技服务有限公司 Tumor cell counting method, device, equipment and storage medium
CN115919464A (en) * 2023-03-02 2023-04-07 四川爱麓智能科技有限公司 Tumor positioning method, system and device and tumor development prediction method
CN116740768A (en) * 2023-08-11 2023-09-12 南京诺源医疗器械有限公司 Navigation visualization method, system, equipment and storage medium based on nasoscope
CN117838306A (en) * 2024-02-01 2024-04-09 南京诺源医疗器械有限公司 Target image processing method and system based on imager

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111640100B (en) * 2020-05-29 2023-12-12 京东方科技集团股份有限公司 Tumor image processing method and device, electronic equipment and storage medium
CN112767347A (en) * 2021-01-18 2021-05-07 上海商汤智能科技有限公司 Image registration method and device, electronic equipment and storage medium
CN115147378B (en) * 2022-07-05 2023-07-25 哈尔滨医科大学 CT image analysis and extraction method
CN115300809B (en) * 2022-07-27 2023-10-24 北京清华长庚医院 Image processing method and device, computer equipment and storage medium
CN115861298B (en) * 2023-02-15 2023-05-23 浙江华诺康科技有限公司 Image processing method and device based on endoscopic visualization
CN117059235A (en) * 2023-08-17 2023-11-14 经智信息科技(山东)有限公司 Automatic rendering method and device for CT image
CN117152442B (en) * 2023-10-27 2024-02-02 吉林大学 Automatic image target area sketching method and device, electronic equipment and readable storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109598728A (en) * 2018-11-30 2019-04-09 腾讯科技(深圳)有限公司 Image partition method, device, diagnostic system and storage medium
CN110310287A (en) * 2018-03-22 2019-10-08 北京连心医疗科技有限公司 It is neural network based to jeopardize the automatic delineation method of organ, equipment and storage medium
CN110889853A (en) * 2018-09-07 2020-03-17 天津大学 Tumor segmentation method based on residual error-attention deep neural network
CN111062955A (en) * 2020-03-18 2020-04-24 天津精诊医疗科技有限公司 Lung CT image data segmentation method and system
CN111127444A (en) * 2019-12-26 2020-05-08 广州柏视医疗科技有限公司 Method for automatically identifying radiotherapy organs at risk in CT image based on depth semantic network
CN111640100A (en) * 2020-05-29 2020-09-08 京东方科技集团股份有限公司 Tumor image processing method and device, electronic equipment and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102006013476B4 (en) * 2006-03-23 2012-11-15 Siemens Ag Method for positionally accurate representation of tissue regions of interest
US8238637B2 (en) * 2006-10-25 2012-08-07 Siemens Computer Aided Diagnosis Ltd. Computer-aided diagnosis of malignancies of suspect regions and false positives in images



Also Published As

Publication number Publication date
CN111640100B (en) 2023-12-12
CN111640100A (en) 2020-09-08

Similar Documents

Publication Publication Date Title
WO2021238438A1 (en) Tumor image processing method and apparatus, electronic device, and storage medium
CN111311592B (en) Three-dimensional medical image automatic segmentation method based on deep learning
WO2020119679A1 (en) Three-dimensional left atrium segmentation method and apparatus, terminal device, and storage medium
WO2023221954A1 (en) Pancreatic tumor image segmentation method and system based on reinforcement learning and attention
CN108428233B (en) Knowledge-based automatic image segmentation
CN112150524B (en) Two-dimensional and three-dimensional medical image registration method and system based on deep learning
CN112509119B (en) Spatial data processing and positioning method and device for temporal bone and electronic equipment
CN111179237A (en) Image segmentation method and device for liver and liver tumor
US11798161B2 (en) Method and apparatus for determining mid-sagittal plane in magnetic resonance images
CN116503607B (en) CT image segmentation method and system based on deep learning
CN110533120B (en) Image classification method, device, terminal and storage medium for organ nodule
CN116228787A (en) Image sketching method, device, computer equipment and storage medium
Akulian et al. A direct comparative study of bronchoscopic navigation planning platforms for peripheral lung navigation: the ATLAS study
CN116797612B (en) Ultrasonic image segmentation method and device based on weak supervision depth activity contour model
CN116051470A (en) Liver CT postoperative tumor segmentation method and device based on data enhancement
JP2023027751A (en) Medical image processing device and medical image processing method
Xu et al. Automatic segmentation of orbital wall from CT images via a thin wall region supervision-based multi-scale feature search network
EP4165593A1 (en) Image segmentation for sets of objects
Pan et al. Automatic annotation of liver computed tomography images based on a vessel‐skeletonization method
JP5403431B2 (en) Tomographic image processing method and apparatus
Perera-Bel et al. Segmentation of the placenta and its vascular tree in Doppler ultrasound for fetal surgery planning
CN117831757B (en) Pathological CT multi-mode priori knowledge-guided lung cancer diagnosis method and system
CN116385756B (en) Medical image recognition method and related device based on enhancement annotation and deep learning
CN116309449B (en) Image processing method, device, equipment and storage medium
CN117408908B (en) Preoperative and intraoperative CT image automatic fusion method based on deep neural network

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21813985

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21813985

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 03.07.2023)
