CN111640100A - Tumor image processing method and device, electronic equipment and storage medium - Google Patents

Tumor image processing method and device, electronic equipment and storage medium

Info

Publication number
CN111640100A
Authority
CN
China
Prior art keywords
target organ
data
image
tumor
original image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010474294.7A
Other languages
Chinese (zh)
Other versions
CN111640100B (en)
Inventor
王斯凡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BOE Technology Group Co Ltd
Original Assignee
BOE Technology Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BOE Technology Group Co Ltd filed Critical BOE Technology Group Co Ltd
Priority to CN202010474294.7A priority Critical patent/CN111640100B/en
Publication of CN111640100A publication Critical patent/CN111640100A/en
Priority to PCT/CN2021/086139 priority patent/WO2021238438A1/en
Application granted granted Critical
Publication of CN111640100B publication Critical patent/CN111640100B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/66Analysis of geometric attributes of image moments or centre of gravity
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20092Interactive image processing based on input by user
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20112Image segmentation details
    • G06T2207/20132Image cropping
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30096Tumor; Lesion

Abstract

The invention provides a tumor image processing method and device, electronic equipment and a storage medium. The method includes the following steps: acquiring an original image scanned for a target organ; coarsely identifying the original image to obtain, from the original image, region data of the region where the target organ is located; finely identifying the region data to acquire first data of the target organ and second data of a tumor on the target organ; and performing two-dimensional delineation and/or three-dimensional reconstruction of the target organ and the tumor on the target organ according to the first data and the second data. A higher-precision visual result of the target organ and the tumor on it can thus be obtained, so that a doctor can more accurately determine information such as the position, size, and shape of the target organ and the tumor from the processed, annotated image, which facilitates diagnosis and treatment.

Description

Tumor image processing method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of medical image processing technologies, and in particular, to a method and an apparatus for processing a tumor image, an electronic device, and a storage medium.
Background
At present, doctors mainly use CT scanning for tumor examination. A single CT scan generates at least 500 MB of DICOM data comprising more than 300 tomographic images, which the doctor must review one by one, taking more than 30 minutes. If a diagnosis is confirmed, the imaging physician must also perform a three-dimensional reconstruction to help the clinician plan surgery, which usually takes more than 60 minutes. The doctor's workload is heavy and efficiency is low. With the development of image processing technology, automated image processing is increasingly applied to medical images to reduce the time doctors spend on this slice-by-slice review.
Disclosure of Invention
The embodiment of the disclosure provides a method for processing a tumor image, which includes the following steps: acquiring an original image of a target organ; roughly identifying the original image to obtain region data of a region where a target organ is located from the original image; performing fine identification on the region data to acquire first data of the target organ and second data of a tumor on the target organ; and according to the first data and the second data, performing two-dimensional delineation and/or three-dimensional reconstruction on the target organ and the tumor on the target organ.
In some embodiments, the coarsely identifying the original image to obtain region data of the region where the target organ is located from the original image includes: inputting the original image into a U-shaped network structure with three-layer step connections to obtain a first image; performing three-dimensional coordinate mapping on each position point in the first image to obtain the spatial coordinates of each position point; and determining the region data of the target organ according to the spatial coordinates.
In some embodiments, said determining the region data of the target organ according to the spatial coordinates comprises: performing cluster analysis on the spatial coordinates to obtain the center-of-gravity position corresponding to the position points in the first image; acquiring a first distance between each position point and the center-of-gravity position; identifying the maximum value among the first distances and using the maximum value as a cropping radius; and cropping around the center-of-gravity position according to the cropping radius, so that the cropped region is used as the region data of the target organ.
In some embodiments, said identifying the maximum value among the first distances and using the maximum value as a cropping radius comprises: acquiring a redundancy amount for the cropping radius, and taking the sum of the maximum value and the redundancy amount as the cropping radius.
In some embodiments, performing fine identification on the region data to acquire the first data of the target organ and the second data of the tumor on the target organ includes: acquiring a region image corresponding to the region data; inputting the region image into a 3D U-shaped network structure with a sparse connection module and a multi-level residual module; finely identifying the region image using the 3D U-shaped network structure, and acquiring a second image that has been finely identified through the 3D U-shaped network structure; and parsing the second image to obtain the first data of the target organ and the second data of the tumor on the target organ.
In some embodiments, the acquiring the second image finely identified through the 3D U-shaped network structure includes: downsampling the region data with the 3D U-shaped network structure to obtain a first feature map; inputting the first feature map into the sparse connection module to obtain a second feature map; inputting the second feature map into the multi-level residual module to obtain a third feature map; and upsampling the third feature map to acquire the second image.
In some embodiments, the sparse connection module comprises four cascaded branches.
In some embodiments, the multi-level residual module employs pooling operations at three scales.
In some embodiments, said two-dimensional delineation of the target organ and the tumor on the target organ according to the first data and the second data comprises: acquiring a selection instruction for the target organ, wherein the selection instruction includes a selected position of the target organ; extracting a target original image corresponding to the selected position from the original image according to the selection instruction; and performing, according to the first data and the second data, two-dimensional delineation of the target organ and the tumor on the target organ.
In some embodiments, the selecting instruction further includes an image angle, and the extracting, according to the selecting instruction, the target original image corresponding to the selected position from the original image includes: and extracting a target original image which corresponds to the selected position and accords with the image angle from the original image according to the selection instruction.
According to the tumor image processing method provided by the application, the original image undergoes two rounds of identification, first coarse and then fine, so that the target organ and the tumor on it are obtained with higher precision and can be two-dimensionally delineated and/or three-dimensionally reconstructed. A doctor can therefore more accurately determine information such as the position, size, and shape of the target organ and the tumor from the processed, annotated image, which facilitates diagnosis and treatment.
The embodiment of the present disclosure provides a tumor image processing apparatus, including: an acquisition module for acquiring an original image scanned for a target organ; a first identification module for coarsely identifying the original image to obtain, from the original image, region data of the region where the target organ is located; a second identification module for finely identifying the region data to acquire first data of the target organ and second data of a tumor on the target organ; and an identification module for performing two-dimensional delineation and/or three-dimensional reconstruction of the target organ and the tumor on the target organ according to the first data and the second data.
The embodiment of the present disclosure provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and when the processor executes the program, the processor implements the above tumor image processing method.
The disclosed embodiments provide a computer-readable storage medium on which a computer program is stored, which when executed by a processor implements the above-described method of processing a tumor image.
Drawings
The foregoing and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a flowchart of a method for processing a tumor image according to an embodiment of the present invention;
FIG. 2 is a flow chart of another exemplary method for processing a tumor image according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a simplified Unet model according to an embodiment of the present invention;
FIG. 4 is a flow chart of another exemplary method for processing a tumor image according to an embodiment of the present invention;
fig. 5 is a schematic diagram illustrating an effect of the simplified Unet model after processing according to the embodiment of the present invention;
FIG. 6 is a flowchart of another exemplary method for processing a tumor image according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a 3D U-shaped network structure having a sparse connection module and a multi-level residual module according to an embodiment of the present invention;
FIG. 8 is a flowchart of another exemplary method for processing a tumor image according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of a sparse connection module according to an embodiment of the present invention;
fig. 10 is a schematic structural diagram of a multi-level residual module according to an embodiment of the present invention;
FIG. 11 is a schematic diagram of a training sample set according to an embodiment of the present invention;
FIG. 12 is a flowchart of a method for processing a tumor image according to another embodiment of the present invention;
FIG. 13 is a schematic diagram of a two-dimensional delineation of a target organ and a tumor thereon in accordance with an embodiment of the present invention;
FIG. 14 is a schematic diagram of a two-dimensional delineation and a three-dimensional reconstruction of a target organ and a tumor thereon according to another embodiment of the present invention;
fig. 15 is a block diagram of a tumor image processing apparatus according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
A method and apparatus for processing a tumor image, an electronic device, and a storage medium according to embodiments of the present invention will be described below with reference to the drawings.
Fig. 1 is a flowchart illustrating a method for processing a tumor image according to an embodiment of the present invention. As shown in fig. 1, the method for processing a tumor image according to an embodiment of the present invention includes the following steps:
s101: an original image of the target organ is acquired.
The original image is, for example, a CT (Computed Tomography) image taken of a target organ of a patient. A single CT scan generates DICOM (Digital Imaging and Communications in Medicine) data of 500 MB or more, comprising roughly 300 or more tomographic images.
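The patent does not prescribe how the DICOM series is read. As one possible sketch, the tomographic slices can be assembled into a single volume with SimpleITK; the library choice and helper below are assumptions, not part of the disclosure.

```python
# Minimal sketch of loading a CT series as the "original image" of step S101.
# SimpleITK is an assumption; the patent names no specific loading library.
import SimpleITK as sitk

def load_ct_volume(dicom_dir: str):
    reader = sitk.ImageSeriesReader()
    series_files = reader.GetGDCMSeriesFileNames(dicom_dir)  # sorted slice paths
    reader.SetFileNames(series_files)
    image = reader.Execute()                 # 3D volume with spacing/origin metadata
    volume = sitk.GetArrayFromImage(image)   # numpy array, shape (slices, H, W)
    return volume, image.GetSpacing()
```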
S102: and carrying out coarse identification on the original image so as to obtain the region data of the region where the target organ is located from the original image.
S103: and performing fine identification on the region data to acquire first data of the target organ and second data of the tumor on the target organ.
The first data may include information such as the position and size of the target organ, and the second data may include information such as the position, size, and shape of the tumor.
S104: and performing two-dimensional delineation and/or three-dimensional reconstruction on the target organ and the tumor on the target organ according to the first data and the second data.
Therefore, with the tumor image processing method provided by the application, the original image undergoes two rounds of identification, first coarse and then fine, so that the target organ and the tumor on it are obtained with higher precision and can be two-dimensionally delineated and/or three-dimensionally reconstructed. A doctor can thus more accurately determine information such as the position, size, and shape of the target organ and the tumor from the processed, annotated image, which facilitates diagnosis and treatment.
According to another embodiment of the invention, as shown in fig. 2, the coarse recognition of the original image to obtain the region data of the region where the target organ is located from the original image includes:
s201: the original image is input into a U-type network structure with three-layer step connections to obtain a first image.
It should be noted that the present application uses a simplified Unet model. The Unet model has a simple structure and tightly couples deep semantic features, and the present application optimizes it further: as shown in fig. 3, the fourth layer of the traditional Unet structure is removed, leaving a three-layer step connection. This reduces the model's parameters by about 1/2 and effectively improves operating efficiency.
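As an illustration of such a three-layer structure, a minimal PyTorch sketch follows; the channel widths and the 2D per-slice formulation are assumptions, since the patent only specifies removing the fourth layer of the traditional Unet.

```python
# Sketch of a three-level U-shaped network with step (skip) connections,
# i.e., a traditional Unet with its fourth level removed.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
    )

class ThreeLevelUNet(nn.Module):
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        self.enc1 = conv_block(in_ch, 32)
        self.enc2 = conv_block(32, 64)
        self.enc3 = conv_block(64, 128)           # bottom level; level 4 removed
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.head = nn.Conv2d(32, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        e3 = self.enc3(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(e3), e2], dim=1))  # step connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # step connection
        return self.head(d1)
```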
It should be noted that recognizing the whole CT scan image directly can introduce large errors. In kidney tumor recognition, for example, a large amount of background space surrounds the two kidneys, which not only increases the computation required for image processing but also degrades the accuracy of the recognition result. Therefore, the image obtained from the U-shaped network structure is further processed to determine the region data of the region where the target organ is located; in other words, the input to the subsequent fine identification contains only the region where the target organ is located, with irrelevant information excised, which effectively reduces the amount of data the fine identification must process. For a kidney, the input data for fine identification may be region data containing only the left and right kidneys.
S202: and carrying out three-dimensional coordinate mapping on each position point in the first image to obtain the space coordinate of each position point.
S203: and determining the region data of the target organ according to the space coordinates.
Specifically, as shown in fig. 4, determining region data of the target organ according to the spatial coordinates includes:
s301: and carrying out cluster analysis on the space coordinates to obtain the gravity center position corresponding to each position point in the first image.
S302: a first distance between each location point and the position of the center of gravity is obtained.
S303: the maximum value in the first distance is identified and taken as the shear radius.
S304: the center position is clipped according to the clipping radius so that the clipped region is used as the region data of the target organ.
It should be understood that when the target organ is the kidney, since a person has two kidneys, there may be two center-of-gravity positions; that is, after the cluster analysis, two center-of-gravity positions can be found from the clustering result, and the region data of the two target organs is cropped out accordingly. The clustering effect may be as shown in fig. 5.
Further, to avoid cropping errors, an elastic margin, i.e., a redundancy amount for the cropping radius, may be added. The redundancy may be $L_2 = 1\% \times L_{CT}$, where $L_{CT}$ is the size of the original image, and the cropping size is $L = L_1 + L_2$, where $L_1$ is the distance from the center-of-gravity position to the farthest position point.
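Putting S301-S304 together, one possible sketch of the clustering-and-cropping step follows; k-means with two clusters (for the two-kidney case) and cube-shaped crops are assumptions.

```python
# Sketch of steps S301-S304: cluster the foreground coordinates, find each
# cluster's center of gravity, take the maximum point-to-center distance L1
# plus the 1% redundancy L2 as the cropping radius, and crop around the center.
import numpy as np
from sklearn.cluster import KMeans

def crop_regions(mask: np.ndarray, volume: np.ndarray, n_organs: int = 2):
    coords = np.argwhere(mask > 0)                       # coarse-identified points
    labels = KMeans(n_clusters=n_organs, n_init=10).fit_predict(coords)
    L_ct = max(volume.shape)                             # size of the original image
    crops = []
    for k in range(n_organs):
        pts = coords[labels == k]
        center = pts.mean(axis=0)                        # center-of-gravity position
        L1 = np.linalg.norm(pts - center, axis=1).max()  # maximum first distance
        L = L1 + 0.01 * L_ct                             # cropping radius L = L1 + L2
        lo = np.maximum((center - L).astype(int), 0)
        hi = np.minimum((center + L).astype(int) + 1, volume.shape)
        crops.append(volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]])
    return crops
```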
It should be understood that a target organ such as the kidney typically accounts for only about 10% of a CT data set; including a large amount of irrelevant information can cause model overfitting and increases the difficulty of identifying the kidney and the tumor. The coarse identification and removal of irrelevant information provided by the present application therefore improves the purity of the data, so the trained model achieves a better recognition effect.
Therefore, the regional data of the target organ can be extracted through rough identification, so that the data processing amount is reduced for the subsequent precise identification process, and the image processing precision and the operation speed are effectively improved.
According to another embodiment of the present application, as shown in fig. 6, performing fine identification on the region data to acquire the first data of the target organ and the second data of the tumor comprises:
s401: and acquiring a region image corresponding to the region data.
The region data may be a data block containing information on a lesion and a tumor acquired through rough identification, and since the 3DU network is an image processing network, the data block needs to be imaged for fine identification.
S402: the region image is input into a 3DU type network structure with a sparse connection module and a multi-stage residual error module.
It should be noted that the 3D U-shaped network structure with a sparse connection module (S module) and a multi-level residual module (P module) proposed in the present application can be denoted the SP-3DUnet model; that is, the sparse connection module and the multi-level residual module are added to the 3DUnet model, specifically at its bottom layer, after the downsampling and before the upsampling of the 3DUnet model.
Further, the 3DUnet model itself is optimized: one convolution operation is removed from each convolution layer of the original model, reducing the parameters by about 1/2 and shrinking the model. Since this correspondingly weakens the model's ability to extract deep image semantics, the sparse connection module (S module) and the multi-level residual module (P module) are added at the bottom layer of the optimized 3D U-shaped model, as shown in fig. 7, to offset the loss of semantic expressive power. This improves recognition accuracy and operating efficiency while increasing the parameters by only about 1/5.
S403: and carrying out fine identification on the area image by using the 3DU type network structure, and acquiring a second image which is subjected to fine identification through the 3DU type network structure.
As a specific embodiment, as shown in figs. 8-10, step S403 of acquiring the second image finely identified through the 3D U-shaped network structure includes:
S4031: The region data is downsampled using the 3D U-shaped network structure to obtain a first feature map.
S4032: The first feature map is input into the sparse connection module to obtain a second feature map.
It should be noted that the present application proposes to encode the high-level semantic feature maps through sparse connections based on dilated (atrous) convolution, where the dilated convolutions are stacked in a cascading manner. Specifically, in the embodiment of the present application, the sparse connection module is divided into four cascaded branches.
As a specific example, as shown in fig. 9, given the requirements of the subsequent three-dimensional model on the output data and the hardware limits of the actual GPU (Graphics Processing Unit) used in the present application, dilated convolutions with 1 × 1 and 3 × 3 kernels may be used, with receptive fields of 3 × 3, 7 × 7, and 9 × 9 for the corresponding branches.
It should be appreciated that, in a sparse connection module, convolutions with a large receptive field extract and generate more abstract features for large targets, while convolutions with a small receptive field extract and generate more abstract features for small targets. By combining dilated convolutions with different dilation rates, the sparse connection module can extract features of targets of various sizes, establish sparse connections among the key semantic features at the bottom layer, and offset the loss of semantic description caused by removing convolution operations.
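A sketch of such a module follows. The dilation rates (1, 2, 1) are chosen so that the cascaded receptive fields come out to 3 × 3, 7 × 7, and 9 × 9 as described; how the four branch outputs are fused (here, concatenation followed by a 1 × 1 convolution) is an assumption.

```python
# Sketch of the sparse connection (S) module: four cascaded branches built
# from 1x1 and 3x3 dilated convolutions; each branch feeds the next, so the
# receptive field grows to 3, then 7, then 9.
import torch
import torch.nn as nn

class SparseConnectionModule(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.b0 = nn.Conv3d(ch, ch, 1)                          # 1x1, RF 1
        self.b1 = nn.Conv3d(ch, ch, 3, padding=1, dilation=1)   # RF 3
        self.b2 = nn.Conv3d(ch, ch, 3, padding=2, dilation=2)   # RF 3 + 4 = 7
        self.b3 = nn.Conv3d(ch, ch, 3, padding=1, dilation=1)   # RF 7 + 2 = 9
        self.fuse = nn.Conv3d(4 * ch, ch, 1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        y0 = self.act(self.b0(x))
        y1 = self.act(self.b1(y0))   # cascaded branches
        y2 = self.act(self.b2(y1))
        y3 = self.act(self.b3(y2))
        return self.fuse(torch.cat([y0, y1, y2, y3], dim=1))
```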
S4033: and inputting the second feature map into a multi-stage residual error module to obtain a third feature map.
It should be noted that tumors on the target organ vary in size; the tumor of a late-stage patient may exceed 2/3 of the organ, and when the tumor size falls outside the range covered by the current data set, algorithm performance may degrade. By adopting pooling operations at different scales, the multi-level residual module can extract feature information at different scales, improving the ability to recognize targets of different sizes.
As a specific example, as shown in fig. 10, pooling operations at three scales are used, and the three branches output feature maps at three scales. To reduce the weight dimensionality and the computational cost, a 1 × 1 convolution can also be applied after each pooling branch, reducing the feature map to 1/N of its original size, where N represents the number of channels of the original feature map.
S4034: and upsampling the third feature map to acquire a second image.
And obtaining the same size characteristic as the original characteristic diagram by bilinear interpolation on the third characteristic diagram reduced to 1/N of the original size.
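One possible reading of the multi-level residual module is sketched below; the pooling window sizes (2, 4, 8) and the PSPNet-style channel reduction to 1/N with N branches are assumptions where the text is ambiguous.

```python
# Sketch of the multi-level residual (P) module: pooling at three scales, a
# 1x1 convolution after each pooling branch to shrink the channel dimension,
# interpolation back to the input size, and fusion with the unpooled input.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiLevelResidualModule(nn.Module):
    def __init__(self, ch, scales=(2, 4, 8)):
        super().__init__()
        self.scales = scales
        n = len(scales)
        self.reduce = nn.ModuleList([nn.Conv3d(ch, ch // n, 1) for _ in scales])
        self.fuse = nn.Conv3d(ch + (ch // n) * n, ch, 1)

    def forward(self, x):
        size = x.shape[2:]
        feats = [x]                                # residual path keeps the input
        for s, conv in zip(self.scales, self.reduce):
            y = F.avg_pool3d(x, kernel_size=s)     # pooling at one scale
            y = conv(y)                            # 1x1 conv reduces channels
            y = F.interpolate(y, size=size, mode='trilinear', align_corners=False)
            feats.append(y)
        return self.fuse(torch.cat(feats, dim=1))
```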
S404: the second image is parsed to obtain first data comprising the target organ and second data of a tumor on the target organ.
That is, the image produced by fine identification through the 3D U-shaped network structure with the sparse connection module and the multi-level residual module is a feature image, and it must be parsed to acquire the first data of the target organ and the second data of the tumor on the target organ. For example, positions belonging to the target organ can be labeled 1, positions belonging to the tumor labeled 2, and all other positions labeled 0; the set of positions labeled for the target organ then constitutes the first data, expressing the organ's position and size.
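A minimal sketch of this parsing step, assuming the network emits per-class scores for background, organ, and tumor:

```python
# Sketch of step S404: take the per-voxel argmax so each voxel is labeled
# 0 (background), 1 (target organ), or 2 (tumor), then read off the first
# and second data as coordinate sets.
import numpy as np

def parse_output(logits: np.ndarray):
    # logits: array of shape (3, D, H, W) for classes background/organ/tumor
    label_map = np.argmax(logits, axis=0)
    first_data = np.argwhere(label_map == 1)   # organ voxel positions
    second_data = np.argwhere(label_map == 2)  # tumor voxel positions
    return label_map, first_data, second_data
```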
It should be noted that both the simplified U-shaped network structure used for coarse identification and the 3D U-shaped network structure with the sparse connection module and the multi-level residual module used for fine identification require deep-learning training before they can accurately identify the first data of the target organ and the second data of the tumor on the target organ.
Furthermore, during training, a number of original images in which the positions of the target organ and the tumor have been manually annotated can be used as the input training sample set. To make the training results more accurate, the manually annotated training sample set can be expanded by flipping, translating, rotating, deforming, and similar transformations.
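A sketch of such sample-set expansion follows, applying the same flip, translation, and rotation to the image and its annotation so the labels stay aligned; elastic deformation is omitted for brevity, and the shift and angle values are arbitrary examples.

```python
# Sketch of expanding a manually annotated training pair (3D image + label mask).
import numpy as np
import scipy.ndimage as ndi

def augment(image: np.ndarray, mask: np.ndarray):
    samples = [(image, mask)]
    # flip along the last axis
    samples.append((np.flip(image, axis=-1).copy(), np.flip(mask, axis=-1).copy()))
    # translate by a few voxels (order=0 keeps labels integer-valued)
    shift = (0, 5, 5)
    samples.append((ndi.shift(image, shift, order=1), ndi.shift(mask, shift, order=0)))
    # rotate in-plane by 10 degrees
    samples.append((ndi.rotate(image, 10.0, axes=(-2, -1), reshape=False, order=1),
                    ndi.rotate(mask, 10.0, axes=(-2, -1), reshape=False, order=0)))
    return samples
```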
The method adopts a Dice coefficient loss function. The Dice loss is expressed, in the generalized weighted form consistent with the definitions below, as:

$$L_{dice} = 1 - 2\,\frac{\sum_{k}\omega_{k}\sum_{i=1}^{N} p(k,i)\,g(k,i)}{\sum_{k}\omega_{k}\sum_{i=1}^{N}\left(p(k,i)+g(k,i)\right)}$$

where N is the number of pixels, p(k,i) ∈ [0,1] is the predicted probability that pixel i belongs to class k (taken from the parsed target organ image), g(k,i) ∈ {0,1} is the true label of pixel i for class k, and ω_k is the weight of class k. In the embodiment of the present application, the weight may be set as

$$\omega_{k} = \frac{1}{\left(\sum_{i=1}^{N} g(k,i)\right)^{2}}$$

The overall loss function using the Dice coefficient may be $L_{loss} = L_{dice} + L_{reg}$, where $L_{reg}$ denotes a regularization loss term (also known as weight decay) used to avoid overfitting.
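A minimal sketch of this loss in PyTorch, assuming softmaxed predictions, one-hot ground-truth masks, and an L2 weight-decay term standing in for $L_{reg}$ (the decay coefficient is an assumption):

```python
# Sketch of the generalized Dice loss with w_k = 1 / (sum_i g(k,i))^2.
import torch

def dice_loss(probs, target_onehot, eps=1e-6):
    # probs, target_onehot: (batch, K, D, H, W); probs already softmaxed
    dims = (0, 2, 3, 4)                                  # sum over batch and voxels
    w = 1.0 / (target_onehot.sum(dim=dims) ** 2 + eps)   # per-class weight w_k
    inter = (probs * target_onehot).sum(dim=dims)        # sum_i p(k,i) * g(k,i)
    union = (probs + target_onehot).sum(dim=dims)        # sum_i p(k,i) + g(k,i)
    return 1.0 - 2.0 * (w * inter).sum() / ((w * union).sum() + eps)

def total_loss(probs, target_onehot, model, weight_decay=1e-5):
    l_reg = weight_decay * sum((p ** 2).sum() for p in model.parameters())
    return dice_loss(probs, target_onehot) + l_reg
```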
It should also be understood that, since both the simplified U-shaped network structure and the 3D U-shaped network structure with the sparse connection module and the multi-level residual module rely on deep learning, the accuracy of image recognition can be verified by a doctor's manual inspection. If a recognition error is found, it is corrected with a brush function and the original recognition result is replaced by the corrected result; for example, the correction is automatically returned to the cloud training database to update the training sample set and retrain the model, as shown in fig. 11.
As another possible embodiment, as shown in fig. 12, performing two-dimensional delineation of the target organ and the tumor on the target organ according to the first data and the second data includes:
s501: selection instructions for the target organ are obtained.
Wherein the selection instruction comprises a selected position of the target organ.
S502: and extracting a target original image corresponding to the selected position from the original images according to the selection instruction.
In some embodiments, the selection instruction may further include an image angle, and thus, a target original image corresponding to the selected position and conforming to the image angle needs to be extracted from the original image.
S503: and according to the first data and the second data, performing two-dimensional delineation on the target organ and the tumor on the target organ on the original image.
That is, after the original image of the target organ has been identified, the target organ and the tumor on it can be two-dimensionally delineated on the original image according to the first data and the second data, so that the doctor and the patient can clearly see the lesion. This also helps the doctor focus attention on the images that contain the kidney when making a medical diagnosis from CT images, effectively improving diagnostic efficiency, enabling accurate diagnosis, and preventing missed diagnoses. Furthermore, three-dimensional reconstruction of the target organ and the tumor on it can effectively improve visualization during doctor-patient communication.
Specifically, a selection instruction for the target organ can be acquired, the position and angle selected by the doctor and/or patient extracted from it, and the tomographic image corresponding to that position and angle extracted from the original image. For each position point in that image it is then judged whether the point belongs to the first data and/or the second data; if so, it is marked according to the first data and/or second data, and otherwise no operation is performed.
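A sketch of this delineation step, assuming the label map produced by fine identification and OpenCV contour drawing (the patent does not fix a drawing library; colors and line width are arbitrary):

```python
# Sketch of steps S501-S503: pick the selected tomographic slice and draw the
# organ (label 1) and tumor (label 2) contours on it.
import cv2
import numpy as np

def delineate_slice(volume: np.ndarray, label_map: np.ndarray, slice_idx: int):
    img = volume[slice_idx].astype(np.float32)
    img = cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
    for cls, color in ((1, (0, 255, 0)), (2, (0, 0, 255))):   # organ green, tumor red
        m = (label_map[slice_idx] == cls).astype(np.uint8)
        contours, _ = cv2.findContours(m, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        cv2.drawContours(img, contours, -1, color, 1)
    return img
```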
For example, as shown in fig. 13, the left image is a tomographic slice of a kidney at a top-view angle with the human body upright; specifically, it is the 286th of 520 tomographic images. The right image is the left image with the kidney and the kidney tumor identified. Further, the image angles may also include a side-view angle and a rear-view angle with the body upright, as shown in fig. 14, where the upper-left image of fig. 14 is a sectional image at the side-view angle, the upper-right image is a sectional image at the top-view angle, the lower-right image is a sectional image at the rear-view angle, and the lower-left image is a stereoscopic view of the three-dimensional model of the target organ and the tumor on it.
Alternatively, after the first data and the second data are obtained through fine identification, the target organ and the tumor on it can be three-dimensionally modeled according to the first data and the second data to obtain a three-dimensional model, as shown in fig. 14; corresponding identification data are then extracted from the three-dimensional model according to a selection instruction, and two-dimensional delineation is performed on the original image.
It should be understood that, as shown in the lower left corner of fig. 14, the embodiment of the present application may also perform three-dimensional reconstruction on the target organ and the tumor on the target organ according to the first data and the second data, and directly display the three-dimensional reconstructed stereo image.
It should also be understood that the multiple images in figs. 13-14 may be displayed together on a display terminal, making it easier for the physician and/or patient to relate the simultaneously presented views, i.e., the morphology and size of the same target organ location and/or the same tumor location seen from different angles.
Furthermore, according to the identified three-dimensional reconstruction model of the target organ and the tumor region, a doctor can accurately acquire the position and the form of the tumor at the target organ through a model result, so that the doctor can conveniently perform medical planning, such as operation planning, radiotherapy planning, chemotherapy planning and the like according to the three-dimensional reconstruction model.
For example, in surgical planning, the tumor position determined by the method provided in the application can help the doctor locate the tumor precisely, shortening operation time and improving surgical quality; accurate positioning also avoids misplacing the surgical incision, so the incision can be smaller, the patient's wound heals faster, and the patient's pain is reduced. In radiotherapy planning, the tumor size and position determined by the method help the doctor plan the radiation intensity and reduce the exposure of normal tissue. In chemotherapy planning, the tumor position, size, shape, and other information determined by the method help the doctor plan the drug dosage, reducing the impact on the patient's normal cells and thus easing the patient's treatment burden.
In summary, according to the processing method of the tumor image provided by the application, the original image is identified twice, i.e., roughly and finely, so that the target organ and the tumor thereon with higher precision can be obtained, and the target organ and the tumor thereon are subjected to two-dimensional delineation and/or three-dimensional reconstruction, so that a doctor can more accurately determine information such as the position, size and shape of the target organ according to the target organ image processed by the application, and diagnosis and treatment are facilitated.
In order to implement the above embodiments, the present invention further provides a tumor image processing apparatus.
Fig. 15 is a block diagram of a tumor image processing apparatus according to an embodiment of the present invention. As shown in fig. 15, the tumor image processing apparatus 10 includes: the system comprises an acquisition module 11, a first recognition module 12, a second recognition module 13 and an identification module 14.
The acquiring module 11 is configured to acquire an original image captured for a target organ.
A first identification module 12, configured to perform coarse identification on the original image to obtain region data of a region where a target organ is located from the original image.
A second identification module 13, configured to perform fine identification on the region data, and obtain first data of the target organ and second data of a tumor on the target organ.
And the identification module 14 is configured to perform two-dimensional delineation and three-dimensional reconstruction on the target organ and the tumor on the target organ according to the first data and the second data.
Further, the first identification module 12 is specifically configured to: input the original image into a U-shaped network structure with three-layer step connections to obtain a first image; perform three-dimensional coordinate mapping on each position point in the first image to obtain the spatial coordinates of each position point; and determine the region data of the target organ according to the spatial coordinates.
Further, the first identification module 12 is specifically configured to: perform cluster analysis on the spatial coordinates to obtain the center-of-gravity position corresponding to the position points in the first image; acquire a first distance between each position point and the center-of-gravity position; identify the maximum of the first distances and use it as a cropping radius; and crop around the center-of-gravity position according to the cropping radius, so that the cropped region is used as the region data of the target organ.
Further, the first identification module 12 is specifically configured to: acquire a redundancy amount for the cropping radius, and take the sum of the maximum value and the redundancy amount as the cropping radius.
Further, the second identification module 13 is specifically configured to: acquire a region image corresponding to the region data; input the region image into a 3D U-shaped network structure with a sparse connection module and a multi-level residual module; finely identify the region image using the 3D U-shaped network structure, and acquire a second image that has been finely identified through the 3D U-shaped network structure; and parse the second image to obtain the first data of the target organ and the second data of the tumor on the target organ.
Further, the second identification module 13 is specifically configured to: downsample the region data using the 3D U-shaped network structure to obtain a first feature map; input the first feature map into the sparse connection module to obtain a second feature map; input the second feature map into the multi-level residual module to obtain a third feature map; and upsample the third feature map to acquire the second image.
Further, the sparse connection module comprises four cascaded branches.
Further, the multi-level residual module adopts three scales of pooling operations.
Further, the identification module 14 is further configured to: acquiring a selection instruction of the target organ, wherein the selection instruction comprises a selection position of the target organ; extracting a target original image corresponding to the selected position from the original image according to the selection instruction; and according to the first data and the second data, performing two-dimensional delineation and three-dimensional reconstruction on the target organ and the tumor on the target organ.
Further, the identification module 14 is further configured to: and extracting a target original image which corresponds to the selected position and accords with the image angle from the original image according to the selection instruction.
It should be noted that the foregoing explanation of the embodiment of the tumor image processing method is also applicable to the tumor image processing apparatus of this embodiment, and details are not repeated here.
Based on the foregoing embodiments, an embodiment of the present invention further provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and when the processor executes the computer program, the electronic device implements the foregoing tumor image processing method.
In order to implement the above embodiments, the present invention also proposes a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the aforementioned processing method of a tumor image.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first", "second" and "first" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing steps of a custom logic function or process, and alternate implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc. Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.

Claims (13)

1. A method of processing a tumor image, comprising the steps of:
acquiring an original image of a target organ;
roughly identifying the original image to obtain region data of the region where the target organ is located from the original image;
performing fine identification on the region data to acquire first data of the target organ and second data of a tumor on the target organ;
and according to the first data and the second data, performing two-dimensional delineation and/or three-dimensional reconstruction on the target organ and the tumor on the target organ.
2. The method for processing the tumor image according to claim 1, wherein the roughly identifying the original image to obtain the region data of the region where the target organ is located from the original image comprises:
inputting the original image into a U-shaped network structure with three-layer step connection to obtain a first image;
carrying out three-dimensional coordinate mapping on each position point in the first image to obtain a space coordinate of each position point;
and determining the region data of the target organ according to the space coordinates.
3. The method of processing a tumor image according to claim 2, wherein said determining region data of the target organ according to the spatial coordinates comprises:
performing cluster analysis on the space coordinates to obtain the center-of-gravity position corresponding to each position point in the first image;
acquiring a first distance between each position point and the center-of-gravity position;
identifying a maximum value among the first distances and using the maximum value as a cropping radius;
and cropping around the center-of-gravity position according to the cropping radius, so that the cropped region is used as the region data of the target organ.
4. The method of processing a tumor image according to claim 3, wherein the identifying a maximum value among the first distances and using the maximum value as a cropping radius comprises:
acquiring a redundancy amount for the cropping radius, and taking the sum of the maximum value and the redundancy amount as the cropping radius.
5. The method for processing the tumor image according to claim 1, wherein the performing the fine identification on the region data to obtain the first data of the target organ and the second data of the tumor on the target organ comprises:
acquiring a region image corresponding to the region data;
inputting the region image into a 3D U-shaped network structure having a sparse connection module and a multi-level residual module;
finely identifying the region image by using the 3D U-shaped network structure, and acquiring a second image which has been finely identified through the 3D U-shaped network structure;
and parsing the second image to obtain the first data of the target organ and the second data of the tumor on the target organ.
6. The method for processing a tumor image according to claim 5, wherein the finely identifying the region image by using the 3D U-shaped network structure and acquiring the second image finely identified through the 3D U-shaped network structure comprises:
downsampling the region data by using the 3D U-shaped network structure to obtain a first feature map;
inputting the first feature map into the sparse connection module to obtain a second feature map;
inputting the second feature map into the multi-level residual module to obtain a third feature map;
and upsampling the third feature map to acquire the second image.
7. The method of processing a tumor image according to claim 5 or 6, wherein the sparse connection module comprises four cascaded branches.
8. The method of claim 5 or 6, wherein the multi-level residual module employs a three-scale pooling operation.
9. The method of processing a tumor image according to claim 1, wherein the two-dimensional delineation of the target organ and the tumor on the target organ based on the first data and the second data comprises:
acquiring a selection instruction of the target organ, wherein the selection instruction comprises a selection position of the target organ;
extracting a target original image corresponding to the selected position from the original image according to the selection instruction;
and according to the first data and the second data, performing two-dimensional delineation on the target organ and the tumor on the target organ on the original image.
10. The method for processing the tumor image according to claim 9, wherein the selection instruction further includes an image angle, and the extracting the target original image corresponding to the selection position from the original images according to the selection instruction includes:
and extracting a target original image which corresponds to the selected position and accords with the image angle from the original image according to the selection instruction.
11. An apparatus for processing a tumor image, comprising:
an acquisition module for acquiring an original image scanned for a target organ;
the first identification module is used for carrying out coarse identification on the original image so as to obtain the region data of the region where the target organ is located from the original image;
the second identification module is used for carrying out fine identification on the region data to acquire first data of the target organ and second data of a tumor on the target organ;
and the identification module is used for performing two-dimensional delineation and three-dimensional reconstruction on the target organ and the tumor on the target organ according to the first data and the second data.
12. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements a method of processing a tumor image as claimed in any one of claims 1 to 10.
13. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out a method of processing a tumor image as claimed in any one of claims 1 to 10.
CN202010474294.7A 2020-05-29 2020-05-29 Tumor image processing method and device, electronic equipment and storage medium Active CN111640100B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010474294.7A CN111640100B (en) 2020-05-29 2020-05-29 Tumor image processing method and device, electronic equipment and storage medium
PCT/CN2021/086139 WO2021238438A1 (en) 2020-05-29 2021-04-09 Tumor image processing method and apparatus, electronic device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010474294.7A CN111640100B (en) 2020-05-29 2020-05-29 Tumor image processing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111640100A true CN111640100A (en) 2020-09-08
CN111640100B CN111640100B (en) 2023-12-12

Family

ID=72331191

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010474294.7A Active CN111640100B (en) 2020-05-29 2020-05-29 Tumor image processing method and device, electronic equipment and storage medium

Country Status (2)

Country Link
CN (1) CN111640100B (en)
WO (1) WO2021238438A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115482463B (en) * 2022-09-01 2023-05-05 北京低碳清洁能源研究院 Land cover identification method and system for mining areas based on generative adversarial networks
CN115908363B (en) * 2022-12-07 2023-09-22 赛维森(广州)医疗科技服务有限公司 Tumor cell statistics method, device, equipment and storage medium
CN115919464B (en) * 2023-03-02 2023-06-23 四川爱麓智能科技有限公司 Tumor positioning method, system, device and tumor development prediction method
CN116740768B (en) * 2023-08-11 2023-10-20 南京诺源医疗器械有限公司 Navigation visualization method, system, equipment and storage medium based on nasal endoscope

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111127444B (en) * 2019-12-26 2021-06-04 广州柏视医疗科技有限公司 Method for automatically identifying radiotherapy organs at risk in CT images based on a deep semantic network
CN111640100B (en) * 2020-05-29 2023-12-12 京东方科技集团股份有限公司 Tumor image processing method and device, electronic equipment and storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080075343A1 (en) * 2006-03-23 2008-03-27 Matthias John Method for the positionally accurate display of regions of interest tissue
WO2008050332A2 (en) * 2006-10-25 2008-05-02 Siemens Computer Aided Diagnosis Ltd. Computer diagnosis of malignancies and false positives
CN110310287A (en) * 2018-03-22 2019-10-08 北京连心医疗科技有限公司 Neural-network-based automatic delineation method for organs at risk, device and storage medium
CN110889853A (en) * 2018-09-07 2020-03-17 天津大学 Tumor segmentation method based on residual-attention deep neural network
CN109598728A (en) * 2018-11-30 2019-04-09 腾讯科技(深圳)有限公司 Image partition method, device, diagnostic system and storage medium
CN111062955A (en) * 2020-03-18 2020-04-24 天津精诊医疗科技有限公司 Lung CT image data segmentation method and system

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021238438A1 (en) * 2020-05-29 2021-12-02 京东方科技集团股份有限公司 Tumor image processing method and apparatus, electronic device, and storage medium
CN111640100B (en) * 2020-05-29 2023-12-12 京东方科技集团股份有限公司 Tumor image processing method and device, electronic equipment and storage medium
CN112215769A (en) * 2020-10-09 2021-01-12 深圳开立生物医疗科技股份有限公司 Ultrasonic image processing method and device, ultrasonic equipment and storage medium
CN112767347A (en) * 2021-01-18 2021-05-07 上海商汤智能科技有限公司 Image registration method and device, electronic equipment and storage medium
CN115147378A (en) * 2022-07-05 2022-10-04 哈尔滨医科大学 CT image analysis and extraction method
CN115147378B (en) * 2022-07-05 2023-07-25 哈尔滨医科大学 CT image analysis and extraction method
CN115300809A (en) * 2022-07-27 2022-11-08 北京清华长庚医院 Image processing method and device, computer equipment and storage medium
CN115300809B (en) * 2022-07-27 2023-10-24 北京清华长庚医院 Image processing method and device, computer equipment and storage medium
CN115861298A (en) * 2023-02-15 2023-03-28 浙江华诺康科技有限公司 Image processing method and device based on endoscopy visualization
CN117059235A (en) * 2023-08-17 2023-11-14 经智信息科技(山东)有限公司 Automatic rendering method and device for CT image
CN117152442A (en) * 2023-10-27 2023-12-01 吉林大学 Automatic image target area delineation method and device, electronic equipment and readable storage medium
CN117152442B (en) * 2023-10-27 2024-02-02 吉林大学 Automatic image target area delineation method and device, electronic equipment and readable storage medium

Also Published As

Publication number Publication date
WO2021238438A1 (en) 2021-12-02
CN111640100B (en) 2023-12-12

Similar Documents

Publication Publication Date Title
CN111640100B (en) Tumor image processing method and device, electronic equipment and storage medium
JP5814504B2 (en) Medical image automatic segmentation system, apparatus and processor using statistical model
US8369585B2 (en) Automatic classification of information in images
CN111008984B (en) Automatic contour line drawing method for normal organ in medical image
CN108428233B (en) Knowledge-based automatic image segmentation
US9129391B2 (en) Semi-automated preoperative resection planning
CN111340825B (en) Method and system for generating mediastinum lymph node segmentation model
CN110853743A (en) Medical image display method, information processing method, and storage medium
CN112529909A (en) Tumor image brain region segmentation method and system based on image completion
CN110751187A (en) Training method of abnormal area image generation network and related product
CN110738633B (en) Three-dimensional image processing method and related equipment for organism tissues
US9361701B2 (en) Method and system for binary and quasi-binary atlas-based auto-contouring of volume sets in medical images
CN110533120B (en) Image classification method, device, terminal and storage medium for organ nodule
CN111462270A (en) Reconstruction system and method based on novel coronavirus pneumonia CT detection
CN116797612B Ultrasonic image segmentation method and device based on weakly supervised deep active contour model
CN113348485A (en) Abnormality detection method, abnormality detection program, abnormality detection device, server device, and information processing method
CN108876783B (en) Image fusion method and system, medical equipment and image fusion terminal
CN116309640A (en) Image automatic segmentation method based on multi-level multi-attention MLMA-UNet network
CN116797519A (en) Brain glioma segmentation and three-dimensional visualization model training method and system
CN113256754B (en) Stacking projection reconstruction method for segmented small-area tumor mass
Sha et al. A robust segmentation method based on improved U-Net
CN114820483A (en) Image detection method and device and computer equipment
CN114708283A (en) Image object segmentation method and device, electronic equipment and storage medium
CN113850816A (en) Cervical cancer MRI image segmentation device and method
CN111739004A (en) Image processing method, apparatus and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant