CN111640100B - Tumor image processing method and device, electronic equipment and storage medium - Google Patents

Tumor image processing method and device, electronic equipment and storage medium

Info

Publication number
CN111640100B
CN111640100B (application CN202010474294.7A)
Authority
CN
China
Prior art keywords
target organ
data
image
tumor
original image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010474294.7A
Other languages
Chinese (zh)
Other versions
CN111640100A (en)
Inventor
王斯凡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BOE Technology Group Co Ltd
Original Assignee
BOE Technology Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BOE Technology Group Co Ltd filed Critical BOE Technology Group Co Ltd
Priority to CN202010474294.7A
Publication of CN111640100A
Priority to PCT/CN2021/086139
Application granted
Publication of CN111640100B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/66Analysis of geometric attributes of image moments or centre of gravity
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20092Interactive image processing based on input by user
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20112Image segmentation details
    • G06T2207/20132Image cropping
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30096Tumor; Lesion

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Processing (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a tumor image processing method and apparatus, an electronic device, and a storage medium. The method comprises the following steps: acquiring an original image scanned for a target organ; performing coarse recognition on the original image to obtain, from the original image, region data of the region where the target organ is located; performing fine recognition on the region data to obtain first data of the target organ and second data of the tumor on the target organ; and performing two-dimensional delineation and/or three-dimensional reconstruction of the target organ and the tumor on the target organ according to the first data and the second data. A higher-precision visual result of the target organ and the tumor on it can thus be obtained, so that a doctor can more accurately determine the position, size, shape and other information of the target organ and the tumor from the identified image processed by the application, which facilitates diagnosis and treatment.

Description

Tumor image processing method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of medical image processing technologies, and in particular, to a method and an apparatus for processing a tumor image, an electronic device, and a storage medium.
Background
At present, doctors mainly use CT scanning for tumor examination. A single CT scan generates at least 500 MB of DICOM data, containing more than 300 tomographic images that the doctor must review one by one, which takes more than 30 minutes. If the imaging physician must also perform a three-dimensional reconstruction after diagnosis to assist the clinician in surgical planning, this usually takes a further 60 minutes or more. The doctor's labor intensity is therefore high and the efficiency low. With the development of image processing technology, image processing has gradually been applied to medical images in order to reduce the time doctors spend checking slices one by one.
Disclosure of Invention
The embodiments of the disclosure provide a tumor image processing method, comprising the following steps: acquiring an original image of a target organ; performing coarse recognition on the original image to obtain, from the original image, region data of the region where the target organ is located; performing fine recognition on the region data to obtain first data of the target organ and second data of the tumor on the target organ; and performing two-dimensional delineation and/or three-dimensional reconstruction of the target organ and the tumor on the target organ according to the first data and the second data.
In some embodiments, performing coarse recognition on the original image to obtain, from the original image, region data of the region where the target organ is located includes: inputting the original image into a U-shaped network structure with three layers of skip connections to obtain a first image; performing three-dimensional coordinate mapping on each position point in the first image to obtain the spatial coordinates of each position point; and determining the region data of the target organ according to the spatial coordinates.
In some embodiments, determining the region data of the target organ according to the spatial coordinates includes: performing cluster analysis on the spatial coordinates to obtain the center-of-gravity positions corresponding to the position points in the first image; acquiring a first distance between each position point and the center-of-gravity position; identifying the maximum value among the first distances and taking it as a shearing radius; and shearing around the center-of-gravity position according to the shearing radius, so that the sheared-out region is used as the region data of the target organ.
In some embodiments, the identifying a maximum value of the first distance and taking the maximum value as a shearing radius comprises: and obtaining the redundancy amount of the shearing radius, and taking the sum of the maximum value and the redundancy amount as the shearing radius.
In some embodiments, performing fine recognition on the region data to obtain the first data of the target organ and the second data of the tumor on the target organ includes: acquiring a region image corresponding to the region data; inputting the region image into a 3DU type network structure having a sparse connection module and a multi-level residual module; performing fine recognition on the region image using the 3DU type network structure to obtain a second image finely recognized by the 3DU type network structure; and parsing the second image to acquire the first data of the target organ and the second data of the tumor on the target organ.
In some embodiments, acquiring the second image finely recognized by the 3DU type network structure includes: down-sampling the region data using the 3DU type network structure to obtain a first feature map; inputting the first feature map into the sparse connection module to obtain a second feature map; inputting the second feature map into the multi-level residual module to obtain a third feature map; and up-sampling the third feature map to acquire the second image.
In some embodiments, the sparse connection module comprises four cascading branches.
In some embodiments, the multi-level residual module employs a three-scale pooling operation.
In some embodiments, the two-dimensional delineation of the target organ and the tumor on the target organ according to the first data and the second data includes: acquiring a selection instruction for the target organ, wherein the selection instruction includes a selected position of the target organ; extracting, from the original image according to the selection instruction, a target original image corresponding to the selected position; and performing two-dimensional delineation of the target organ and the tumor on the target organ according to the first data and the second data.
In some embodiments, the selecting instruction further includes an image angle, and the extracting, according to the selecting instruction, the target original image corresponding to the selected position from the original images includes: and extracting a target original image which corresponds to the selected position and accords with the image angle from the original image according to the selection instruction.
According to the tumor image processing method provided by the application, the original image is identified coarsely first and then finely, so that the target organ and the tumor on the target organ can be obtained with higher precision and then delineated in two dimensions and/or reconstructed in three dimensions. A doctor can therefore more accurately determine the position, size, shape and other information of the target organ and the tumor on the target organ from the processed, identified image, which facilitates diagnosis and treatment.
The embodiments of the disclosure provide a tumor image processing apparatus, comprising: an acquisition module for acquiring an original image captured of a target organ; a first identification module for performing coarse recognition on the original image to obtain, from the original image, region data of the region where the target organ is located; a second identification module for performing fine recognition on the region data to acquire first data of the target organ and second data of the tumor on the target organ; and an identification module for performing two-dimensional delineation and/or three-dimensional reconstruction of the target organ and the tumor on the target organ according to the first data and the second data.
The embodiment of the disclosure provides an electronic device, which comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor realizes the tumor image processing method when executing the program.
The embodiment of the disclosure proposes a computer readable storage medium, on which a computer program is stored, which when executed by a processor implements the above-mentioned method of processing a tumor image.
Drawings
The foregoing and/or additional aspects and advantages of the application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a flowchart of a method for processing tumor images according to an embodiment of the present application;
FIG. 2 is a flowchart of another method for processing tumor images according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a simplified Unet model according to an embodiment of the present application;
FIG. 4 is a flowchart of another method for processing tumor images according to an embodiment of the present application;
FIG. 5 is a schematic diagram of an effect of the simplified Unet model according to the embodiment of the present application;
FIG. 6 is a flowchart of another method for processing tumor images according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a 3DU network structure with a sparse connection module and a multi-level residual module according to an embodiment of the present application;
FIG. 8 is a flowchart of a method for processing a tumor image according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a sparse connection module according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of a multi-level residual module according to an embodiment of the present application;
FIG. 11 is a schematic diagram of a training sample set according to an embodiment of the present application;
FIG. 12 is a flowchart of a method for processing a tumor image according to an embodiment of the present application;
FIG. 13 is a schematic diagram showing two-dimensional delineation of a target organ and a tumor thereon in accordance with an embodiment of the present application;
FIG. 14 is a schematic diagram of two-dimensional delineation and three-dimensional reconstruction of a target organ and its tumor thereon in accordance with another embodiment of the present application;
fig. 15 is a block diagram of a tumor image processing apparatus according to an embodiment of the present application.
Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative and intended to explain the present application and should not be construed as limiting the application.
The following describes a tumor image processing method and apparatus, an electronic device, and a storage medium according to embodiments of the present application with reference to the accompanying drawings.
Fig. 1 is a flow chart of a method for processing tumor images according to an embodiment of the present application. As shown in fig. 1, the method for processing a tumor image according to an embodiment of the present application includes the following steps:
s101: an original image of a target organ is acquired.
The original image is, for example, a CT (Computed Tomography) image taken of a target organ of the patient. A CT scan generates at least 500 MB of DICOM (Digital Imaging and Communications in Medicine) data, corresponding to roughly 300 or more tomographic images.
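For illustration only, a minimal Python sketch of loading such a series into one 3D volume is given below; the SimpleITK library and the directory path are assumptions of this sketch, not part of the embodiment:

    import SimpleITK as sitk

    def load_ct_volume(dicom_dir):
        """Read a directory of DICOM tomographic slices into one 3D CT volume."""
        reader = sitk.ImageSeriesReader()
        files = reader.GetGDCMSeriesFileNames(dicom_dir)  # slices sorted by position
        reader.SetFileNames(files)
        return reader.Execute()

    volume = load_ct_volume("/data/ct/patient_001")  # hypothetical path
    print(volume.GetSize())  # e.g. (512, 512, 300+) voxels for one scan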
S102: and performing rough recognition on the original image to obtain the region data of the region where the target organ is located from the original image.
S103: and carrying out precise identification on the region data, and acquiring first data of the target organ and second data of the tumor on the target organ.
The first data may include information such as a position and a size of the target organ, and the second data may include information such as a position, a size, and a shape of the tumor.
S104: and carrying out two-dimensional delineation and/or three-dimensional reconstruction on the target organ and the tumor on the target organ according to the first data and the second data.
Therefore, by identifying the original image coarsely first and finely second, the tumor image processing method can obtain the target organ and the tumor on the target organ with higher precision and perform two-dimensional delineation and/or three-dimensional reconstruction of them, so that a doctor can more accurately determine the position, size, shape and other information of the target organ and the tumor on the target organ from the identified image, which facilitates diagnosis and treatment.
According to another embodiment of the present application, as shown in fig. 2, performing coarse recognition on the original image to obtain, from the original image, the region data of the region where the target organ is located includes:
S201: the original image is input into a U-shaped network structure with three layers of skip connections to obtain a first image.
It should be noted that, in the embodiment of the present application, the U-shaped network structure is, for example, a simplified Unet model. The Unet model has a simple structure and tightly combines deep and shallow semantics. The present application optimizes the Unet model: as shown in fig. 3, by removing the fourth-layer connection of the conventional Unet structure and adjusting it to three layers of skip connections, the number of model parameters can be reduced by 1/2 and the running efficiency can be effectively improved.
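A non-limiting PyTorch sketch of such a three-level U-shaped network follows; the 2D convolutions and channel widths are assumptions of this sketch, not values from the patent:

    import torch
    import torch.nn as nn

    def block(cin, cout):
        return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True))

    class SimplifiedUNet(nn.Module):
        """Three encoder levels; the fourth level of a conventional Unet is removed."""
        def __init__(self, in_ch=1, n_classes=2, w=(16, 32, 64)):
            super().__init__()
            self.enc1, self.enc2, self.enc3 = block(in_ch, w[0]), block(w[0], w[1]), block(w[1], w[2])
            self.pool = nn.MaxPool2d(2)
            self.up2 = nn.ConvTranspose2d(w[2], w[1], 2, stride=2)
            self.dec2 = block(w[1] * 2, w[1])
            self.up1 = nn.ConvTranspose2d(w[1], w[0], 2, stride=2)
            self.dec1 = block(w[0] * 2, w[0])
            self.head = nn.Conv2d(w[0], n_classes, 1)

        def forward(self, x):
            e1 = self.enc1(x)
            e2 = self.enc2(self.pool(e1))
            e3 = self.enc3(self.pool(e2))                         # bottom of the "U"
            d2 = self.dec2(torch.cat([self.up2(e3), e2], dim=1))  # skip connection
            d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
            return self.head(d1)

    y = SimplifiedUNet()(torch.randn(1, 1, 64, 64))  # -> (1, 2, 64, 64)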
It should be noted that identifying the whole CT scan image directly can produce large errors. When identifying kidney tumors, for example, a large amount of background space lies outside the two kidneys, which not only increases the computation of the image processing but also affects the accuracy of the recognition result. Therefore, the application further processes the image acquired through the U-shaped network structure to determine the region data of the region where the target organ is located; that is, the input data for the subsequent fine recognition contains only the region where the target organ is located, with irrelevant information excised, which effectively reduces the amount of data the fine recognition must process. In the case of the kidneys, the input data for the fine recognition may be region data containing only the left and right kidneys.
S202: and carrying out three-dimensional coordinate mapping on each position point in the first image to obtain the space coordinates of each position point.
S203: regional data of the target organ is determined based on the spatial coordinates.
Specifically, as shown in fig. 4, determining the region data of the target organ according to the spatial coordinates includes:
s301: and carrying out cluster analysis on the space coordinates to obtain the gravity center positions corresponding to the position points in the first image.
S302: a first distance between each location point and the center of gravity location is obtained.
S303: the maximum value in the first distance is identified and taken as the shearing radius.
S304: and shearing the heavy center position according to the shearing radius, so that the sheared area is used as area data of the target organ.
It should be understood that when the target organ is the kidney, there are two kidneys and therefore possibly two center-of-gravity positions; that is, two centers may be found from the result of the cluster analysis, and the region data of the two target organs are cut out accordingly. The clustering effect may be as shown in fig. 5.
Further, to avoid shearing errors, an elastic scale, i.e. a redundancy amount, is added to the shearing radius. The redundancy amount L2 may be taken in proportion to the size LCT of the original image, and the shearing size is L = L1 + L2, where L1 is the maximum distance from a position point to the center-of-gravity position.
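For illustration, the following Python sketch walks through steps S301 to S304 under stated assumptions: scikit-learn's KMeans stands in for the cluster analysis, the two-cluster setting reflects the double-kidney example, and the redundancy amount is passed in as a free parameter rather than a value from the patent:

    import numpy as np
    from sklearn.cluster import KMeans

    def crop_organ_regions(first_image, redundancy):
        """Cluster organ voxels, then cut a box around each center of gravity."""
        coords = np.argwhere(first_image > 0)              # spatial coordinates (S202)
        km = KMeans(n_clusters=2, n_init=10).fit(coords)   # cluster analysis (S301)
        boxes = []
        for k, center in enumerate(km.cluster_centers_):
            pts = coords[km.labels_ == k]
            d = np.linalg.norm(pts - center, axis=1)       # first distances (S302)
            radius = d.max() + redundancy                  # shearing radius L1 + L2 (S303)
            lo = np.maximum(np.floor(center - radius).astype(int), 0)
            hi = np.minimum(np.ceil(center + radius).astype(int) + 1,
                            first_image.shape)
            boxes.append(tuple(slice(a, b) for a, b in zip(lo, hi)))  # shear (S304)
        return boxes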
It should be understood that, because the target organ portion, for example the kidney portion, accounts for only about 10% of the whole CT data, including a large amount of irrelevant information would cause the model to overfit and increase the difficulty of identifying the kidney and the tumor. The coarse recognition and the excision of irrelevant information provided by the application improve the purity of the data, so the trained model has a better recognition effect.
Therefore, the application extracts the region data of the target organ through coarse recognition, reducing the amount of data to be processed in the subsequent fine recognition and effectively improving the accuracy and operation speed of the image processing.
According to yet another embodiment of the present application, as shown in fig. 6, performing fine recognition on the region data to acquire the first data of the target organ and the second data of the tumor on the target organ includes:
s401: and acquiring an area image corresponding to the area data.
The area data may be a data block containing lesion and tumor information obtained by coarse recognition, and since the 3DU type network is an image processing network, the data block needs to be imaged for fine recognition.
S402: and inputting the region image into a 3DU type network structure with a sparse connection module and a multi-stage residual error module.
It should be noted that the 3DU type network structure with the sparse connection module (S module) and the multi-level residual module (P module) proposed by the present application may be denoted the SP-3DUnet model; that is, the sparse connection module (S module) and the multi-level residual module (P module) are added to the 3DUnet model, specifically at the bottommost layer of the 3DUnet model, after its downsampling and before its upsampling.
Furthermore, the application optimizes the 3DUnet model: each convolution layer removes one convolution operation relative to the original model, which reduces the parameters by 1/2 and shrinks the model volume, but correspondingly weakens the model's ability to extract the deep semantics of the image. The application therefore adds a sparse connection module (S module) and a multi-level residual module (P module) at the bottom layer of the optimized 3DUnet model, as shown in fig. 7, to counteract the loss of semantic expression capacity. On the premise of increasing the parameter count by only 1/5, this improves the recognition accuracy and running efficiency of the algorithm.
S403: and carrying out fine recognition on the regional image by utilizing the 3DU type network structure, and obtaining a second image subjected to fine recognition by the 3DU type network structure.
As a specific embodiment, as shown in fig. 8 to 10, step S403 of obtaining a second image precisely identified by the 3DU network structure includes:
s4031: and up-sampling the region data by using a 3DU type network structure to obtain a first feature map.
S4032: and inputting the first characteristic diagram into a sparse connection module to obtain a second characteristic diagram.
It should be noted that the present application proposes to encode the high-level semantic feature maps with sparse connections based on hole convolutions, where the hole convolutions are stacked in cascade. Specifically, in the embodiment of the present application, the sparse connection module is divided into four cascading branches.
As a specific example, as shown in fig. 9, given the output-data requirements of the subsequent three-dimensional model and the practical performance limits of the GPU (Graphics Processing Unit) hardware, two kinds of hole convolutions may be employed, giving the four branches receptive fields of 1×1×1, 3×3×3, 7×7×7 and 9×9×9, respectively.
It should be appreciated that, in the sparse connection module, convolutions with a large receptive field extract and generate more abstract features for large targets, while convolutions with a small receptive field extract and generate more abstract features for small targets. By combining hole convolutions with different dilation rates, the sparse connection module can extract features for targets of various sizes, establish a sparse connection over the key bottom-level semantic features, and offset the loss of semantic description caused by the reduced number of convolution operations.
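As a non-limiting sketch, the following PyTorch code reproduces such a module with four cascading branches; the dilation rates and the residual-style summation are assumptions chosen to yield receptive fields of 1, 3, 7 and 9:

    import torch
    import torch.nn as nn

    class SparseConnectionModule(nn.Module):
        """Four cascading branches: identity (receptive field 1) plus stacked
        hole convolutions reaching receptive fields of 3, 7 and 9."""
        def __init__(self, ch):
            super().__init__()
            self.c1 = nn.Conv3d(ch, ch, 3, padding=1, dilation=1)  # RF 3
            self.c2 = nn.Conv3d(ch, ch, 3, padding=2, dilation=2)  # cascaded: RF 7
            self.c3 = nn.Conv3d(ch, ch, 3, padding=1, dilation=1)  # cascaded: RF 9
            self.p1, self.p2, self.p3 = (nn.Conv3d(ch, ch, 1) for _ in range(3))

        def forward(self, x):
            s1 = self.c1(x)          # small receptive field: small targets
            s2 = self.c2(s1)         # medium receptive field
            s3 = self.c3(s2)         # large receptive field: large targets
            return x + self.p1(s1) + self.p2(s2) + self.p3(s3)  # four branches fused

    out = SparseConnectionModule(32)(torch.randn(1, 32, 8, 16, 16))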
S4033: and inputting the second characteristic diagram into a multi-level residual error module to obtain a third characteristic diagram.
It should be noted that, since tumor sizes differ between target organs, and the tumor of a late-stage patient may exceed 2/3 of the target organ's volume, the algorithm performance may drop when the tumor volume is larger or smaller than the sizes covered by the current data set. The application therefore proposes a multi-level residual module to enhance the algorithm's generalization in recognizing multi-size targets. The multi-level residual module uses pooling operations at different scales to extract feature information at different scales, improving the recognition of targets of different sizes.
As a specific embodiment, as shown in fig. 10, pooling operations at three scales are used, and the three branches output feature domains at three scales. To reduce the weight dimension and the computational cost, a 1×1 convolution can also be used after each pooled branch, which reduces the feature domain to 1/N of its original size, where N represents the number of channels of the original feature domain.
S4034: the third feature map is upsampled to obtain a second image.
Wherein the same size feature as the original feature map is obtained by bilinear interpolation for the third feature map reduced to 1/N of the original size.
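For illustration only, a minimal PyTorch sketch of such a multi-level residual module is given below; the three pooling scales (2, 4, 8) are assumptions of this sketch, and trilinear interpolation is used as the volumetric counterpart of the bilinear interpolation mentioned above:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MultiLevelResidualModule(nn.Module):
        """Three-scale pooling; each pooled branch is shrunk to one channel by a
        1x1x1 convolution, interpolated back to the input size, and fused."""
        def __init__(self, ch, scales=(2, 4, 8)):
            super().__init__()
            self.scales = scales
            self.reduce = nn.ModuleList(nn.Conv3d(ch, 1, 1) for _ in scales)

        def forward(self, x):
            outs = [x]
            for s, conv in zip(self.scales, self.reduce):
                p = F.avg_pool3d(x, kernel_size=s, stride=s)  # one pooling scale
                p = conv(p)                                   # channels -> 1 (1/N)
                p = F.interpolate(p, size=x.shape[2:], mode="trilinear",
                                  align_corners=False)        # back to input size
                outs.append(p)
            return torch.cat(outs, dim=1)  # fuse multi-scale features with input

    print(MultiLevelResidualModule(32)(torch.randn(1, 32, 16, 16, 16)).shape)
    # torch.Size([1, 35, 16, 16, 16])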
S404: the second image is parsed to obtain first data comprising the target organ and second data of a tumor on the target organ.
That is, an image after the image is finely recognized by the 3DU type network structure having the sparse connection module and the multi-level residual module is a feature image, and it is necessary to perform parsing so as to acquire first data including the target organ and second data of the tumor on the target organ. For example, the position of the target organ may be marked as 1, the position of the tumor on the target organ may be marked as 2, and the other positions may be marked as 0, so that the position set of the target organ is the first data of the target organ, the position and the size of the target organ can be expressed, and similarly, the position set of the tumor on the target organ is the second data of the tumor on the target organ.
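As a small illustration of this parsing, the following Python sketch (numpy assumed) turns the 0/1/2 label map into the two position sets:

    import numpy as np

    def parse_labels(label_map):
        """0 = background, 1 = target organ, 2 = tumor on the target organ."""
        first_data = np.argwhere(label_map == 1)   # voxel positions of the organ
        second_data = np.argwhere(label_map == 2)  # voxel positions of the tumor
        return first_data, second_data

    labels = np.zeros((4, 4, 4), dtype=np.uint8)
    labels[1:3, 1:3, 1:3] = 1
    labels[2, 2, 2] = 2
    organ_voxels, tumor_voxels = parse_labels(labels)
    print(len(organ_voxels), len(tumor_voxels))  # position sets express size/location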
It should be noted that the simplified U-shaped network structure used for the coarse recognition and the 3DU type network structure with the sparse connection module and the multi-level residual module used for the fine recognition both require deep-learning training, so that the first data of the target organ and the second data of the tumor on the target organ can be recognized accurately in use.
Furthermore, during training, a number of original images with manually annotated target-organ and tumor positions can be input as the training sample set; to make the training result more accurate, the training sample set can be expanded by flipping, translation, rotation, deformation and the like.
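For illustration, the following Python sketch (scipy assumed; the offsets and angle are illustrative values) expands one annotated sample by flipping, translation and rotation; nearest-neighbour interpolation (order=0) keeps the mask's integer labels intact:

    import numpy as np
    from scipy.ndimage import rotate, shift

    def augment(volume, mask):
        """Yield (volume, mask) variants of one annotated training sample."""
        yield np.flip(volume, axis=2), np.flip(mask, axis=2)          # flipping
        yield (shift(volume, (0, 5, 5), order=1),
               shift(mask, (0, 5, 5), order=0))                       # translation
        yield (rotate(volume, 10, axes=(1, 2), reshape=False, order=1),
               rotate(mask, 10, axes=(1, 2), reshape=False, order=0)) # rotation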
The application adopts a Dice-coefficient loss function, where the Dice coefficient is expressed by the following formula:

Dice = Σ_k ω_k · [ 2 Σ_{i=1..N} p(k,i) · g(k,i) ] / [ Σ_{i=1..N} p(k,i) + Σ_{i=1..N} g(k,i) ]

where N is the number of pixels; p(k,i) ∈ [0,1] and g(k,i) ∈ {0,1} respectively denote the predicted probability and the ground-truth label of class k at pixel i, p(k,i) being the predicted probability of each pixel point in the parsed target organ image; k is the class; and the ω_k are class weights satisfying Σ_k ω_k = 1, which may be set as required in the embodiment of the present application.
The loss function using the Dice coefficient may be: L_loss = L_dice + L_reg, where L_dice = 1 − Dice and L_reg denotes a regularization loss term (also referred to as weight decay) used to avoid overfitting.
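A hedged PyTorch sketch of this loss is given below; the (batch, class, voxel) tensor layout, the example class weights, and the squared-weight decay used for L_reg are assumptions of the sketch:

    import torch

    def dice_loss(probs, target_onehot, weights, eps=1e-6):
        """probs, target_onehot: (batch, K, N); weights: (K,) summing to 1."""
        inter = (probs * target_onehot).sum(dim=2)
        denom = probs.sum(dim=2) + target_onehot.sum(dim=2)
        dice_k = (2 * inter + eps) / (denom + eps)       # per-class Dice coefficient
        return 1 - (weights * dice_k).mean(dim=0).sum()  # L_dice = 1 - weighted Dice

    def total_loss(probs, target_onehot, weights, params, decay=1e-4):
        l_reg = decay * sum((p ** 2).sum() for p in params)      # weight decay, L_reg
        return dice_loss(probs, target_onehot, weights) + l_reg  # L_dice + L_reg

    probs = torch.softmax(torch.randn(2, 3, 1000), dim=1)
    labels = torch.nn.functional.one_hot(torch.randint(0, 3, (2, 1000)), 3)
    w = torch.tensor([0.2, 0.4, 0.4])  # assumed class weights summing to 1
    print(dice_loss(probs, labels.permute(0, 2, 1).float(), w))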
It should also be understood that while the simplified U-shaped network structure and the 3DU type network structure with the sparse connection module and the multi-level residual module are undergoing deep-learning training, the accuracy of the image recognition can be verified by a doctor's manual inspection. If a recognition error occurs, it is corrected with the brush function and the correction replaces the original recognition result; for example, the correction result is automatically returned to the cloud training database to update the training sample set and retrain the model, as shown in fig. 11.
As another possible embodiment, as shown in fig. 12, performing two-dimensional delineation of the target organ and the tumor on the target organ according to the first data and the second data includes:
s501: and acquiring a selection instruction of the target organ.
Wherein the selection instruction comprises a selected location of the target organ.
S502: and extracting a target original image corresponding to the selected position from the original images according to the selection instruction.
In some embodiments, the selection instructions may further include an image angle, whereby it is desirable to extract a target original image from the original image that corresponds to the selected location and that matches the image angle.
S503: and according to the first data and the second data, two-dimensional delineation is carried out on the target organ and the tumor on the target organ on the original image.
That is, after the original image of the target organ has been identified, the target organ and the tumor on it can be delineated two-dimensionally on the original image according to the first data and the second data, so that the doctor and the patient can clearly understand the lesion. This also helps doctors concentrate on the images containing the kidneys when making a medical diagnosis from CT images, effectively improving diagnosis and treatment efficiency, achieving accurate diagnosis and treatment, and preventing missed diagnoses. Furthermore, by three-dimensionally reconstructing the target organ and the tumor on it, the application can effectively improve the degree of visualization during doctor-patient communication.
Specifically, a selection instruction for the target organ can be acquired; the position and angle selected by the doctor and/or the patient are extracted from the selection instruction; the tomographic image corresponding to that position and angle is extracted from the original image; and then, for each position point in the image, it is judged whether the point belongs to the first data and/or the second data. If so, the point is marked according to the first data and/or the second data; if not, no operation is performed.
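As a small, non-authoritative sketch of this per-point judgment (numpy assumed; the viewing angle is fixed to the axial plane for brevity), the organ and tumor labels can be outlined on the selected tomographic layer as follows:

    import numpy as np

    def delineate_slice(volume, labels, z):
        """Outline label 1 (organ, first data) and 2 (tumor, second data) on slice z."""
        image, layer = volume[z].astype(float), labels[z]
        for cls in (1, 2):
            mask = layer == cls
            interior = mask.copy()
            interior[1:-1, 1:-1] = (mask[1:-1, 1:-1]
                                    & mask[:-2, 1:-1] & mask[2:, 1:-1]
                                    & mask[1:-1, :-2] & mask[1:-1, 2:])
            image[mask & ~interior] = image.max()  # draw the contour pixels
        return image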
For example, as shown in fig. 13, the left image is a tomographic image of the kidney scanned in a vertical plane of the human body, specifically the image corresponding to layer 286 of 520 tomographic images, and the right image shows the kidney and the kidney tumor identified in the left image. Further, the image angle may also include a side view and a back view of the upright human body. As shown in fig. 14, the upper-left image is a side-view tomogram of the upright human body, the upper-right image a top-view tomogram, the lower-right image a back-view tomogram, and the lower-left image a perspective view of the three-dimensional model of the target organ and the tumor on it.
Alternatively, after the first data and the second data are acquired through fine recognition, three-dimensional modeling of the target organ and the tumor on it is performed according to the first data and the second data to obtain their three-dimensional model, as shown in fig. 14; corresponding identification data are then extracted from the three-dimensional model according to the selection instruction, and two-dimensional delineation is performed on the original image.
It should be understood that, as shown in the lower left corner of fig. 14, the embodiment of the present application may further reconstruct the target organ and the tumor on the target organ in three dimensions according to the first data and the second data, and directly display the three-dimensional reconstructed stereo image.
It should also be appreciated that the multiple images in figs. 13-14 may be displayed together on the display terminal, so that the doctor and/or the patient can conveniently compare the simultaneously displayed images, that is, observe the form and size of the same target-organ location and/or the same tumor location at different angles.
Further, from the three-dimensional reconstruction model of the identified target organ and tumor region, a doctor can accurately obtain the position and form of the tumor on the target organ, which makes it convenient to carry out medical planning, such as surgical planning, radiotherapy planning and chemotherapy planning, on the basis of the model.
For example, during surgical planning, the tumor position determined by the method provided by the application helps doctors locate the tumor accurately, which shortens the operation time and improves the quality of the operation; accurate positioning also avoids a misplaced surgical incision, so the incision can be smaller, the patient's wound heals faster, and the patient's pain is reduced. During radiotherapy planning, the tumor size and position determined by the method help doctors plan the intensity of the radiotherapy rays and reduce the influence of radiotherapy on normal tissue. During chemotherapy planning, the tumor position, size, shape and other information determined by the method help doctors plan the drug dosage, reducing the influence on the patient's normal cells and thereby easing the patient's suffering during treatment.
In summary, the tumor image processing method provided by the application performs coarse recognition first and fine recognition second on the original image, so that the target organ and the tumor on the target organ can be obtained with higher precision and then delineated in two dimensions and/or reconstructed in three dimensions. A doctor can therefore more accurately determine the position, size, shape and other information of the target organ from the processed target-organ images, which facilitates diagnosis and treatment.
In order to achieve the above embodiment, the present application further provides a tumor image processing device.
Fig. 15 is a block diagram of a tumor image processing apparatus according to an embodiment of the present application. As shown in fig. 15, the tumor image processing apparatus 10 includes: the device comprises an acquisition module 11, a first identification module 12, a second identification module 13 and an identification module 14.
Wherein the acquisition module 11 is configured to acquire an original image captured for a target organ.
The first recognition module 12 is configured to perform coarse recognition on the original image, so as to obtain, from the original image, region data of a region where the target organ is located.
And the second identification module 13 is used for carrying out fine identification on the region data and acquiring the first data of the target organ and the second data of the tumor on the target organ.
The identification module 14 is configured to two-dimensionally delineate and three-dimensionally reconstruct the target organ and the tumor on the target organ according to the first data and the second data.
Further, the first identification module 12 is specifically configured to: input the original image into a U-shaped network structure with three layers of skip connections to obtain a first image; perform three-dimensional coordinate mapping on each position point in the first image to obtain the spatial coordinates of each position point; and determine the region data of the target organ according to the spatial coordinates.
Further, the first identification module 12 is specifically configured to: perform cluster analysis on the spatial coordinates to obtain the center-of-gravity positions corresponding to the position points in the first image; acquire a first distance between each position point and the center-of-gravity position; identify the maximum value among the first distances and take it as a shearing radius; and shear around the center-of-gravity position according to the shearing radius, so that the sheared-out region is used as the region data of the target organ.
Further, the first identification module 12 is specifically configured to: obtain a redundancy amount of the shearing radius, and take the sum of the maximum value and the redundancy amount as the shearing radius.
Further, the second identification module 13 is specifically configured to: acquire a region image corresponding to the region data; input the region image into a 3DU type network structure having a sparse connection module and a multi-level residual module; perform fine recognition on the region image using the 3DU type network structure to obtain a second image finely recognized by the 3DU type network structure; and parse the second image to generate the target organ image.
Further, the second identification module 13 is specifically configured to: down-sample the region data using the 3DU type network structure to obtain a first feature map; input the first feature map into the sparse connection module to obtain a second feature map; input the second feature map into the multi-level residual module to obtain a third feature map; and up-sample the third feature map to acquire the second image.
Further, the sparse connection module comprises four cascading branches.
Further, the multi-level residual module adopts a three-scale pooling operation.
Further, the identification module 14 is further configured to: acquire a selection instruction for the target organ, wherein the selection instruction includes a selected position of the target organ; extract, from the original image according to the selection instruction, a target original image corresponding to the selected position; and perform two-dimensional delineation and three-dimensional reconstruction of the target organ and the tumor on the target organ according to the first data and the second data.
Further, the identification module 14 is further configured to: extract, from the original image according to the selection instruction, a target original image that corresponds to the selected position and conforms to the image angle.
It should be noted that the foregoing explanation of the embodiment of the method for processing a tumor image is also applicable to the apparatus for processing a tumor image of this embodiment, and will not be repeated here.
Based on the above embodiment, the embodiment of the present application further provides an electronic device, including a memory, a processor, and a computer program stored on the memory and capable of running on the processor, where the processor implements the foregoing method for processing a tumor image when executing the program.
In order to achieve the above-described embodiments, the present application also proposes a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the aforementioned method of processing a tumor image.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present application, the meaning of "plurality" means at least two, for example, two, three, etc., unless specifically defined otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and additional implementations are included within the scope of the preferred embodiment of the present application in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order from that shown or discussed, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the embodiments of the present application.
Logic and/or steps represented in the flowcharts or otherwise described herein, for example an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, a processor-containing system, or another system that can fetch the instructions from the instruction execution system, apparatus, or device and execute them. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program is printed, as the program can be electronically captured, for instance by optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It is to be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, they may be implemented using any one or a combination of the following techniques well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application-specific integrated circuits having suitable combinational logic gates, programmable gate arrays (PGAs), field-programmable gate arrays (FPGAs), and the like.
Those of ordinary skill in the art will appreciate that all or a portion of the steps carried out in the method of the above-described embodiments may be implemented by a program to instruct related hardware, where the program may be stored in a computer readable storage medium, and where the program, when executed, includes one or a combination of the steps of the method embodiments.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules may also be stored in a computer readable storage medium if implemented in the form of software functional modules and sold or used as a stand-alone product.
The above-mentioned storage medium may be a read-only memory, a magnetic disk, an optical disc, or the like. While embodiments of the present application have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by one of ordinary skill in the art within the scope of the application.

Claims (11)

1. A method for processing a tumor image, comprising the steps of:
acquiring an original image of a target organ;
performing coarse recognition on the original image to obtain, from the original image, region data of a region where the target organ is located;
performing fine recognition on the region data to obtain first data of the target organ and second data of the tumor on the target organ;
performing two-dimensional delineation and/or three-dimensional reconstruction of the target organ and the tumor on the target organ according to the first data and the second data;
the performing coarse recognition on the original image to obtain region data of a region where the target organ is located from the original image, including:
inputting the original image into a U-shaped network structure with three layers of skip connections to obtain a first image;
performing three-dimensional coordinate mapping on each position point in the first image to obtain the spatial coordinates of each position point;
determining the region data of the target organ according to the spatial coordinates;
the determining the region data of the target organ according to the spatial coordinates comprising:
performing cluster analysis on the spatial coordinates to obtain center-of-gravity positions corresponding to the position points in the first image;
acquiring a first distance between each position point and the center-of-gravity position;
identifying a maximum value among the first distances and taking the maximum value as a shearing radius;
and shearing around the center-of-gravity position according to the shearing radius, so that the sheared-out region is used as the region data of the target organ.
2. The method of processing a tumor image according to claim 1, wherein the identifying a maximum value of the first distances and taking the maximum value as a shearing radius comprises:
and obtaining the redundancy amount of the shearing radius, and taking the sum of the maximum value and the redundancy amount as the shearing radius.
3. The method of claim 1, wherein the performing fine recognition on the region data to obtain the first data of the target organ and the second data of the tumor on the target organ comprises:
acquiring an area image corresponding to the area data;
inputting the region image into a 3DU type network structure having a sparse connection module and a multi-level residual module;
performing fine recognition on the region image by utilizing the 3DU type network structure to obtain a second image subjected to fine recognition by the 3DU type network structure;
and parsing the second image to acquire the first data of the target organ and the second data of the tumor on the target organ.
4. The method according to claim 3, wherein the performing fine recognition on the region image using the 3DU type network structure, obtaining a second image subjected to fine recognition by the 3DU type network structure, comprises:
down-sampling the region data using the 3DU type network structure to obtain a first feature map;
inputting the first feature map into the sparse connection module to obtain a second feature map;
inputting the second feature map into the multi-level residual module to obtain a third feature map;
and upsampling the third characteristic diagram to acquire the second image.
5. The method of claim 3 or 4, wherein the sparse connection module comprises four cascading branches.
6. The method of claim 3 or 4, wherein the multi-level residual module employs three-scale pooling operations.
7. The method of claim 1, wherein the two-dimensionally delineating the target organ and the tumor on the target organ based on the first data and the second data comprises:
acquiring a selection instruction of the target organ, wherein the selection instruction comprises a selection position of the target organ;
extracting a target original image corresponding to the selected position from the original image according to the selection instruction;
and according to the first data and the second data, two-dimensional delineation is carried out on the target organ and the tumor on the target organ on the original image.
8. The method according to claim 7, wherein the selection instruction further includes an image angle, and the extracting, from the original images, the target original image corresponding to the selected position according to the selection instruction includes:
and extracting a target original image which corresponds to the selected position and accords with the image angle from the original image according to the selection instruction.
9. A tumor image processing apparatus for implementing the tumor image processing method according to any one of claims 1 to 8, the processing apparatus comprising:
an acquisition module for acquiring an original image scanned for a target organ;
the first identification module is used for carrying out rough identification on the original image so as to obtain area data of an area where the target organ is located from the original image;
the second identification module is used for carrying out fine identification on the region data and acquiring first data of the target organ and second data of the tumor on the target organ;
and the identification module is used for performing two-dimensional delineation and three-dimensional reconstruction of the target organ and the tumor on the target organ according to the first data and the second data.
10. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the method of processing a tumor image according to any one of claims 1-8 when executing the program.
11. A computer-readable storage medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, implements a method for processing a tumor image according to any one of claims 1-8.
CN202010474294.7A 2020-05-29 2020-05-29 Tumor image processing method and device, electronic equipment and storage medium Active CN111640100B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010474294.7A CN111640100B (en) 2020-05-29 2020-05-29 Tumor image processing method and device, electronic equipment and storage medium
PCT/CN2021/086139 WO2021238438A1 (en) 2020-05-29 2021-04-09 Tumor image processing method and apparatus, electronic device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010474294.7A CN111640100B (en) 2020-05-29 2020-05-29 Tumor image processing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111640100A CN111640100A (en) 2020-09-08
CN111640100B true CN111640100B (en) 2023-12-12

Family

ID=72331191

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010474294.7A Active CN111640100B (en) 2020-05-29 2020-05-29 Tumor image processing method and device, electronic equipment and storage medium

Country Status (2)

Country Link
CN (1) CN111640100B (en)
WO (1) WO2021238438A1 (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111640100B (en) * 2020-05-29 2023-12-12 京东方科技集团股份有限公司 Tumor image processing method and device, electronic equipment and storage medium
CN112215769B (en) * 2020-10-09 2024-06-28 深圳开立生物医疗科技股份有限公司 Ultrasonic image processing method and device, ultrasonic equipment and storage medium
CN112767347A (en) * 2021-01-18 2021-05-07 上海商汤智能科技有限公司 Image registration method and device, electronic equipment and storage medium
CN115147378B (en) * 2022-07-05 2023-07-25 哈尔滨医科大学 CT image analysis and extraction method
CN115100185A (en) * 2022-07-22 2022-09-23 深圳市联影高端医疗装备创新研究院 Image processing method, image processing device, computer equipment and storage medium
CN115300809B (en) * 2022-07-27 2023-10-24 北京清华长庚医院 Image processing method and device, computer equipment and storage medium
CN115482463B (en) * 2022-09-01 2023-05-05 北京低碳清洁能源研究院 Land coverage identification method and system for generating countermeasure network mining area
CN115908363B (en) * 2022-12-07 2023-09-22 赛维森(广州)医疗科技服务有限公司 Tumor cell statistics method, device, equipment and storage medium
CN115861298B (en) * 2023-02-15 2023-05-23 浙江华诺康科技有限公司 Image processing method and device based on endoscopic visualization
CN115919464B (en) * 2023-03-02 2023-06-23 四川爱麓智能科技有限公司 Tumor positioning method, system, device and tumor development prediction method
CN116740768B (en) * 2023-08-11 2023-10-20 南京诺源医疗器械有限公司 Navigation visualization method, system, equipment and storage medium based on nasoscope
CN117059235A (en) * 2023-08-17 2023-11-14 经智信息科技(山东)有限公司 Automatic rendering method and device for CT image
CN117152442B (en) * 2023-10-27 2024-02-02 吉林大学 Automatic image target area sketching method and device, electronic equipment and readable storage medium
CN117838306B (en) * 2024-02-01 2024-06-21 南京诺源医疗器械有限公司 Target image processing method and system based on imager

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008050332A2 (en) * 2006-10-25 2008-05-02 Siemens Computer Aided Diagnosis Ltd. Computer diagnosis of malignancies and false positives
CN109598728A (en) * 2018-11-30 2019-04-09 腾讯科技(深圳)有限公司 Image partition method, device, diagnostic system and storage medium
CN110310287A (en) * 2018-03-22 2019-10-08 北京连心医疗科技有限公司 It is neural network based to jeopardize the automatic delineation method of organ, equipment and storage medium
CN110889853A (en) * 2018-09-07 2020-03-17 天津大学 Tumor segmentation method based on residual error-attention deep neural network
CN111062955A (en) * 2020-03-18 2020-04-24 天津精诊医疗科技有限公司 Lung CT image data segmentation method and system
CN111640100A (en) * 2020-05-29 2020-09-08 京东方科技集团股份有限公司 Tumor image processing method and device, electronic equipment and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102006013476B4 (en) * 2006-03-23 2012-11-15 Siemens Ag Method for positionally accurate representation of tissue regions of interest
CN111127444B (en) * 2019-12-26 2021-06-04 广州柏视医疗科技有限公司 Method for automatically identifying radiotherapy organs at risk in CT image based on depth semantic network

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008050332A2 (en) * 2006-10-25 2008-05-02 Siemens Computer Aided Diagnosis Ltd. Computer diagnosis of malignancies and false positives
CN110310287A (en) * 2018-03-22 2019-10-08 北京连心医疗科技有限公司 It is neural network based to jeopardize the automatic delineation method of organ, equipment and storage medium
CN110889853A (en) * 2018-09-07 2020-03-17 天津大学 Tumor segmentation method based on residual error-attention deep neural network
CN109598728A (en) * 2018-11-30 2019-04-09 腾讯科技(深圳)有限公司 Image partition method, device, diagnostic system and storage medium
CN111062955A (en) * 2020-03-18 2020-04-24 天津精诊医疗科技有限公司 Lung CT image data segmentation method and system
CN111640100A (en) * 2020-05-29 2020-09-08 京东方科技集团股份有限公司 Tumor image processing method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN111640100A (en) 2020-09-08
WO2021238438A1 (en) 2021-12-02

Similar Documents

Publication Publication Date Title
CN111640100B (en) Tumor image processing method and device, electronic equipment and storage medium
CN108022238B (en) Method, computer storage medium, and system for detecting object in 3D image
RU2720070C1 (en) Systems and methods of segmenting image using convolution neural network
RU2720440C1 (en) Image segmentation method using neural network
CN111008984B (en) Automatic contour line drawing method for normal organ in medical image
JP2023550844A (en) Liver CT automatic segmentation method based on deep shape learning
CN107545584A (en) The method, apparatus and its system of area-of-interest are positioned in medical image
US9129391B2 (en) Semi-automated preoperative resection planning
CN112529909A (en) Tumor image brain region segmentation method and system based on image completion
CN112348818B (en) Image segmentation method, device, equipment and storage medium
CN108876783B (en) Image fusion method and system, medical equipment and image fusion terminal
CN116797612B (en) Ultrasonic image segmentation method and device based on weak supervision depth activity contour model
CN111353524A (en) System and method for locating patient features
CN110751187A (en) Training method of abnormal area image generation network and related product
CN116309640A (en) Image automatic segmentation method based on multi-level multi-attention MLMA-UNet network
CN111462270A (en) Reconstruction system and method based on novel coronavirus pneumonia CT detection
CN110533120A (en) Image classification method, device, terminal and the storage medium of organ tubercle
CN113348485A (en) Abnormality detection method, abnormality detection program, abnormality detection device, server device, and information processing method
CN113256754B (en) Stacking projection reconstruction method for segmented small-area tumor mass
CN116797519A (en) Brain glioma segmentation and three-dimensional visualization model training method and system
Xu et al. Automatic segmentation of orbital wall from CT images via a thin wall region supervision-based multi-scale feature search network
CN112541909B (en) Lung nodule detection method and system based on three-dimensional neural network of slice perception
Rahmawati et al. Modification Rules for Improving Marching Cubes Algorithm to Represent 3D Point Cloud Curve Images.
CN111739004A (en) Image processing method, apparatus and storage medium
CN114445421B (en) Identification and segmentation method, device and system for nasopharyngeal carcinoma lymph node region

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant