CN115966309A - Recurrence position prediction method, recurrence position prediction device, nonvolatile storage medium, and electronic device - Google Patents


Info

Publication number
CN115966309A
CN115966309A · Application CN202310260320.XA
Authority
CN
China
Prior art keywords
image
target
target image
neural network
network model
Prior art date
Legal status
Pending
Application number
CN202310260320.XA
Other languages
Chinese (zh)
Inventor
陈日清
高珊
苏晨晖
徐宏
Current Assignee
Hangzhou Kunbo Biotechnology Co Ltd
Original Assignee
Hangzhou Kunbo Biotechnology Co Ltd
Application filed by Hangzhou Kunbo Biotechnology Co Ltd filed Critical Hangzhou Kunbo Biotechnology Co Ltd
Priority to CN202310260320.XA
Publication of CN115966309A
Legal status: Pending

Classifications

    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A — TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 90/00 — Technologies having an indirect contribution to adaptation to climate change
    • Y02A 90/10 — Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Image Processing (AREA)

Abstract

The application discloses a recurrence position prediction method, a recurrence position prediction device, a non-volatile storage medium, and an electronic device. The method includes: acquiring a first target image and a second target image, where the first target image is obtained by scanning a target physiological tissue of a first target object before an operation, and the second target image is obtained by scanning the target physiological tissue after the operation is completed; obtaining a third target image from the first target image and the second target image, where a first anatomical point in the first target image matches a second anatomical point in the third target image; and analyzing the first target image and the third target image with a target neural network model to obtain a predicted image output by the model. This solves the technical problem in the related art that, because manual film reading is used to determine whether a patient has relapsed, recurrence cannot be accurately determined.

Description

Recurrence position prediction method, recurrence position prediction device, nonvolatile storage medium, and electronic device
Technical Field
The present application relates to the field of image processing, and in particular, to a recurrence location prediction method, apparatus, non-volatile storage medium, and electronic device.
Background
At present, minimally invasive surgery is increasingly widely used in the treatment of cancers such as lung cancer. Post-operative follow-up after minimally invasive surgery is very important and requires the patient to be reviewed on time to determine whether the cancer has recurred. In the related art, however, recurrence is usually determined by manual film reading: a doctor examines the patient's scanned images and judges whether the patient has relapsed. This approach depends too heavily on the doctor's diagnostic skill, is highly subjective, imposes a heavy workload on doctors, and is error-prone.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiments of the present application provide a recurrence position prediction method and device, a non-volatile storage medium, and an electronic device, so as to at least solve the technical problem in the related art that recurrence cannot be accurately determined because manual film reading is used to determine whether a patient has relapsed.
According to an aspect of an embodiment of the present application, there is provided a recurrence location prediction method, including: acquiring a first target image and a second target image, wherein the first target image is obtained after scanning a target physiological tissue of a first target object before an operation, and the second target image is obtained after scanning the target physiological tissue after the operation is finished; obtaining a third target image according to the first target image and the second target image, wherein the first anatomical point in the first target image is matched with the second anatomical point in the third target image; and analyzing the first target image and the third target image by adopting the target neural network model to obtain a predicted image output by the target neural network model, wherein the predicted image comprises a tag which is used for determining a lesion recurrence region of the target physiological tissue in the predicted image.
Optionally, the step of obtaining a third target image according to the first target image and the second target image includes: determining position information of a first anatomical point in a first target image in the first target image; and registering the second target image with the first target image according to the position information of the first anatomical point to obtain a third target image, wherein the position information of the second anatomical point in the third target image is the same as the position information of the first anatomical point in the first target image, and the first anatomical point and the second anatomical point are the same anatomical point on the target physiological tissue.
Optionally, the tags include a first type of tag and a second type of tag, where the first type of tag is used to mark the ablated region in the predicted image and the second type of tag is used to mark the non-ablated region in the predicted image, the non-ablated region being the recurrence region.
Optionally, before the step of inputting the first target image and the third target image into the target neural network model, the recurrence location prediction method further includes: acquiring a training data set, wherein the training data set comprises a plurality of groups of training samples, each group of training samples in the plurality of groups of training samples comprises a first sample image, a second sample image and a reference image of the same case, the first sample image is an image obtained by scanning a target physiological tissue of a patient before an operation, the second sample image is a post-operation scanning image of the target physiological tissue registered with the first sample image, and the reference image is an image obtained by adding a label in the second sample image; for each training sample in the multiple groups of training samples, inputting a first sample image and a second sample image into a neural network model to be trained, and acquiring a prediction sample image output by the neural network model to be trained, wherein the prediction sample image comprises a label; and adjusting the neural network model to be trained according to the prediction sample image and the reference image, thereby obtaining the target neural network model.
Optionally, the step of adjusting the neural network model to be trained according to the prediction sample image and the reference image includes: comparing the predicted sample image with the reference image to obtain a comparison result, and adjusting parameters of the neural network model to be trained according to the comparison result; or, the predicted sample image and the reference image are displayed to the second target object, and the parameters of the neural network model to be trained are adjusted according to the adjustment operation of the second target object.
Optionally, the comparison result includes a similarity between the prediction sample image and the reference image, wherein the step of adjusting parameters of the neural network model to be trained according to the comparison result includes: and under the condition that the similarity is lower than a preset similarity threshold, adjusting parameters of the neural network model to be trained, regenerating a prediction sample image after each adjustment, and comparing the regenerated prediction sample image with the reference image until the similarity in the comparison result is not lower than the preset similarity threshold.
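The adjust-and-compare loop described in the paragraph above can be sketched in plain Python. Everything below is illustrative only, since the patent specifies no code: `dice_similarity` is a stand-in for whatever similarity measure the comparison result uses, and the toy `predict`/`adjust` pair merely simulates a model whose output approaches the reference image after each parameter adjustment.

```python
def dice_similarity(pred, ref):
    """Dice coefficient between two binary masks given as flat 0/1 lists."""
    inter = sum(p & r for p, r in zip(pred, ref))
    total = sum(pred) + sum(ref)
    return 1.0 if total == 0 else 2.0 * inter / total

def train_until_similar(predict, adjust, reference, threshold=0.9, max_steps=100):
    """Regenerate a prediction after each adjustment, comparing it with the
    reference, until the similarity is no longer below the threshold."""
    for step in range(max_steps):
        pred = predict()
        sim = dice_similarity(pred, reference)
        if sim >= threshold:
            return step, sim  # similarity reached the preset threshold
        adjust()  # tweak model parameters based on the comparison result
    return max_steps, sim

# Toy stand-in for a trainable model: each adjustment fixes one wrong voxel.
reference = [1, 1, 0, 0, 1, 0, 1, 1]
state = [0] * 8
def predict():
    return list(state)
def adjust():
    for i, (p, r) in enumerate(zip(state, reference)):
        if p != r:
            state[i] = r
            break

steps, sim = train_until_similar(predict, adjust, reference, threshold=0.9)
```

In a real training pipeline the adjustment step would be gradient-based and the similarity would be computed over full image volumes; the loop structure, however, matches the claim: adjust, regenerate, re-compare, stop when the threshold is met.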
Optionally, the step of acquiring a training data set includes: acquiring post-operative scans of the target physiological tissue of a target patient corresponding to different time points within a preset post-operative period; registering each of the post-operative scans with a first sample image of the target patient to obtain a plurality of second sample images; and adding labels to the second sample images to obtain a plurality of reference images.
According to another aspect of the embodiments of the present application, there is also provided a recurrence location prediction apparatus, including: an acquisition module, configured to acquire a first target image and a second target image, where the first target image is obtained by scanning a target physiological tissue of a first target object before an operation and the second target image is obtained by scanning the target physiological tissue after the operation is completed; a first processing module, configured to obtain a third target image from the first target image and the second target image, where a first anatomical point in the first target image matches a second anatomical point in the third target image; and a second processing module, configured to analyze the first target image and the third target image with the target neural network model to obtain a predicted image output by the target neural network model, where the predicted image includes a tag used to determine a lesion recurrence region of the target physiological tissue in the predicted image.
Optionally, the first processing module is further configured to: determining position information of a first anatomical point in a first target image in the first target image; and registering the second target image with the first target image according to the position information of the first anatomical point to obtain a third target image, wherein the position information of the second anatomical point in the third target image is the same as the position information of the first anatomical point in the first target image, and the first anatomical point and the second anatomical point are the same anatomical point on the target physiological tissue.
Optionally, the tags include a first type of tag and a second type of tag, where the first type of tag is used to mark the ablated region in the predicted image and the second type of tag is used to mark the non-ablated region in the predicted image, the non-ablated region being the recurrence region.
Optionally, the recurrence location prediction apparatus further includes a model training module, wherein the model training module includes: the training data acquisition module is used for acquiring a training data set, wherein the training data set comprises a plurality of groups of training samples, each group of training samples in the plurality of groups of training samples comprises a first sample image, a second sample image and a reference image of the same case, the first sample image is an image obtained by scanning a target physiological tissue of a patient before an operation, the second sample image is a post-operation scanning image of the target physiological tissue registered with the first sample image, and the reference image is an image obtained by adding a label in the second sample image; the first training module is used for inputting a first sample image and a second sample image into a neural network model to be trained for each training sample in a plurality of groups of training samples and acquiring a prediction sample image output by the neural network model to be trained, wherein the prediction sample image comprises a label; and the second training module is used for adjusting the neural network model to be trained according to the prediction sample image and the reference image so as to obtain the target neural network model.
Optionally, the second training module further includes an adjustment module or an interaction module, where: the adjusting module is used for comparing the predicted sample image with the reference image to obtain a comparison result and adjusting the parameters of the neural network model to be trained according to the comparison result; and the interaction module is used for displaying the prediction sample image and the reference image to the second target object and adjusting the parameters of the neural network model to be trained according to the adjustment operation of the second target object.
Optionally, the comparison result comprises a similarity between the prediction sample image and the reference image; the adjusting module is further used for adjusting parameters of the neural network model to be trained under the condition that the similarity is lower than a preset similarity threshold, regenerating a prediction sample image after each adjustment, and comparing the regenerated prediction sample image with a reference image until the similarity in an obtained comparison result is not lower than the preset similarity threshold.
Optionally, the training data acquisition module is further configured to acquire post-operative scans of the target physiological tissue of a target patient corresponding to different time points within a preset post-operative period; register each of the post-operative scans with a first sample image of the target patient to obtain a plurality of second sample images; and add labels to the second sample images to obtain a plurality of reference images.
According to another aspect of the embodiments of the present application, there is also provided a non-volatile storage medium storing a program, where, when the program runs, a device in which the non-volatile storage medium is located is controlled to execute the above recurrence location prediction method.
According to another aspect of the embodiments of the present application, there is also provided an electronic device, including: a memory and a processor for executing a program stored in the memory, wherein the program when executed performs a recurrence location prediction method.
In the embodiments of the present application, a first target image and a second target image are acquired, where the first target image is obtained by scanning a target physiological tissue of a first target object before an operation and the second target image is obtained by scanning the target physiological tissue after the operation is completed; a third target image is obtained from the first target image and the second target image, where a first anatomical point in the first target image matches a second anatomical point in the third target image; and the first target image and the third target image are analyzed by a target neural network model to obtain a predicted image that includes a tag used to determine a lesion recurrence region of the target physiological tissue. By registering the patient's post-operative scan to the pre-operative scan and letting the neural network model process the pre-operative scan together with the registered image, a predicted image of the recurrence position is obtained without manual film reading. This improves the accuracy of recurrence position prediction, reduces the doctor's workload, and thereby solves the technical problem in the related art that recurrence cannot be accurately determined when manual film reading is used to decide whether a patient has relapsed.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a schematic structural diagram of a computer terminal (mobile terminal) according to an embodiment of the present application;
FIG. 2 is a schematic flow chart diagram illustrating a recurrence location prediction method according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a neural network model according to an embodiment of the present application;
FIG. 4 is a flow chart illustrating a recurrence location prediction procedure according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a recurrence location prediction device according to an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of another recurrence location prediction device according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a second training module according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only partial embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
For a better understanding of the embodiments of the present application, technical terms referred to in the embodiments of the present application are explained below:
Label image: the output image corresponding to the gold-standard image, i.e. the reference image in the training data set of the target neural network model, in which the ablated region and the non-ablated region are marked by a physician.
With the continuing improvement of medical care, multidisciplinary combined treatment is becoming the mainstream approach to cancers such as lung cancer. Taking lung cancer as an example, in addition to conventional treatments such as surgery, chemotherapy, and radiotherapy, the value of minimally invasive intervention (including at least one of radiotherapy, proton therapy, antibody therapy, surgery, valve implantation, thermal ablation, or gluing) in lung cancer treatment is increasingly prominent and widely acknowledged by the medical community. Imaging follow-up after minimally invasive treatment of lung cancer is very important and cannot be overlooked: the risk of post-operative recurrence needs to be assessed through timely review. In the related art, manual film reading is usually used to predict whether recurrence is likely after surgery. This depends too heavily on the doctor's diagnostic skill, is highly subjective, and imposes a huge workload, so the success rate of recurrence position prediction in the related art is low and is strongly tied to the doctor's personal experience. To solve this problem, the embodiments of the present application provide the solutions described in detail below.
In accordance with an embodiment of the present application, there is provided a method embodiment of a recurrence location prediction method, it is noted that the steps illustrated in the flowchart of the figure may be performed in a computer system, such as a set of computer-executable instructions, and that while a logical order is illustrated in the flowchart, in some cases the steps illustrated or described may be performed in an order different than here.
The method provided in the embodiments of the present application may be executed on a mobile terminal, a computer terminal, or a similar computing device. Fig. 1 shows a hardware block diagram of a computer terminal (or mobile device) for implementing the recurrence location prediction method. As shown in fig. 1, the computer terminal 10 (or mobile device 10) may include one or more processors 102 (shown as 102a, 102b, …, 102n; the processor 102 may include, but is not limited to, a processing device such as a microprocessor (MCU) or a programmable logic device (FPGA)), a memory 104 for storing data, and a transmission module 106 for communication functions. In addition, the computer terminal may further include: a display, an input/output interface (I/O interface), a Universal Serial Bus (USB) port (which may be included as one of the ports of the bus), a network interface, a power supply, and/or a camera. It will be understood by those skilled in the art that the structure shown in fig. 1 is only an illustration and does not limit the structure of the electronic device. For example, the computer terminal 10 may also include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1.
It should be noted that the one or more processors 102 and/or other data processing circuitry described above may be referred to generally herein as "data processing circuitry". The data processing circuitry may be embodied in whole or in part in software, hardware, firmware, or any combination thereof. Further, the data processing circuit may be a single stand-alone processing module or incorporated, in whole or in part, into any of the other elements in the computer terminal 10 (or mobile device). As referred to in the embodiments of the application, the data processing circuit acts as a processor control (e.g. selection of variable resistance termination paths connected to the interface).
The memory 104 may be used to store software programs and modules of application software, such as program instructions/data storage devices corresponding to the recurrence location prediction method in the embodiment of the present application, and the processor 102 executes various functional applications and data processing by running the software programs and modules stored in the memory 104, that is, implements the recurrence location prediction method of the application program. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the computer terminal 10 over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used for receiving or transmitting data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the computer terminal 10. In one example, the transmission device 106 includes a Network adapter (NIC) that can be connected to other Network devices through a base station to communicate with the internet. In one example, the transmission device 106 can be a Radio Frequency (RF) module, which is used to communicate with the internet in a wireless manner.
The display may be, for example, a touch screen type Liquid Crystal Display (LCD) that may enable a user to interact with a user interface of the computer terminal 10 (or mobile device).
Under the operating environment, the embodiment of the present application provides a recurrence position prediction method, as shown in fig. 2, the method includes the following steps:
step S202, a first target image and a second target image are obtained, wherein the first target image is obtained after scanning a target physiological tissue of a first target object before an operation, and the second target image is obtained after scanning the target physiological tissue after the operation is finished;
in the technical solution provided in step S202, the target tissue is a tissue of a minimally invasive surgery that needs to perform operations such as ablation in a patient, for example, for a lung cancer patient, the target tissue is a lung, the first target image is a preoperative CT image of the lung, and the second target image is a postoperative CT image of the lung. In addition, it should be noted that the second target image should be a lung CT image acquired within one month after surgery or within 7 days after surgery. For example, when the second target image is acquired too long from the time of operation, the ablation region of the lung may be absorbed, resulting in data mismatch between the post-operative image and the pre-operative image.
Step S204, obtaining a third target image according to the first target image and the second target image, wherein a first anatomical point in the first target image is matched with a second anatomical point in the third target image;
in the technical solution provided in step S204, the step of registering the second target image to the first target image according to the first target image and the second target image to obtain a third target image includes: determining position information of a first anatomical point in a first target image in the first target image; and registering the second target image with the first target image according to the position information of the first anatomical point to obtain a third target image, wherein the position information of the second anatomical point in the third target image is the same as the position information of the first anatomical point in the first target image, and the first anatomical point and the second anatomical point are the same anatomical point on the target physiological tissue. For example, if an anatomical point a exists in the target physiological tissue, and a first anatomical point corresponding to the anatomical point a in the first target image is an anatomical point A1, and a second anatomical point corresponding to the anatomical point a in the third target image is an anatomical point A2, the position information of the anatomical point A2 in the third target image is the same as the position information of the anatomical point A1 in the first target image. That is, the first target image and the third target image may be regarded as images of the same portion of the target tissue at different time points and taken at the same angle, and the first target image and the third target image may be completely overlapped, provided that the target tissue is not changed before and after the operation.
In particular, medical image registration in the medical field generally refers to one or a series of spatial transformations performed on one medical image such that the image may be brought into spatial correspondence with corresponding points on another medical image. This coincidence means that the same anatomical point on the target tissue has the same relative spatial position in both images. The result of the registration should be such that all anatomical points on both images, or at least those that are diagnostically significant or surgically significant, match.
For image registration of the first target image and the second target image, any of a number of registration methods may be employed, for example a conventional image registration method or a neural-network-based deep learning registration method. The registration process typically consists of an affine transformation followed by a deformable transformation. Among conventional registration methods, tools such as ANTs (Advanced Normalization Tools) or Elastix (a medical image registration tool) are commonly used. ANTs can perform image registration using linear interpolation, with mutual information as the optimization criterion during registration; Elastix performs image registration using B-spline interpolation, likewise with mutual information as the optimization criterion.
When the method of deep learning registration is adopted to perform image registration, a VoxelMorph (unsupervised deep learning medical image registration model) can be adopted to perform registration on the images.
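The optimization at the heart of both the conventional tools and the learned methods above can be illustrated, in heavily simplified form, with a 1-D toy: exhaustively search integer translations of a "moving" signal and keep the one that best matches the "fixed" signal. This is illustrative only; sum of squared differences is used purely for brevity, whereas ANTs and Elastix actually optimize mutual information, and VoxelMorph learns a dense deformation field rather than searching over translations.

```python
def ssd(fixed, moving_shifted):
    """Sum of squared differences: the (toy) registration cost."""
    return sum((f - m) ** 2 for f, m in zip(fixed, moving_shifted))

def shift(signal, t, fill=0):
    """Shift a 1-D signal by t voxels, padding with `fill`."""
    n = len(signal)
    out = [fill] * n
    for i in range(n):
        j = i - t
        if 0 <= j < n:
            out[i] = signal[j]
    return out

def register_1d(fixed, moving, search=(-5, 5)):
    """Exhaustively try integer translations, keep the lowest-cost one."""
    best_t, best_cost = None, float("inf")
    for t in range(search[0], search[1] + 1):
        cost = ssd(fixed, shift(moving, t))
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t

fixed  = [0, 0, 1, 3, 1, 0, 0, 0]
moving = [0, 0, 0, 0, 1, 3, 1, 0]   # same intensity profile, shifted right by 2
t = register_1d(fixed, moving)       # a shift of -2 realigns it with `fixed`
```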
And step S206, analyzing the first target image and the third target image by using the target neural network model to obtain a predicted image output by the target neural network model, wherein the predicted image comprises a tag, and the tag is used for determining a lesion recurrence region of the target physiological tissue in the predicted image.
In the technical solution provided in step S206, the tags include a first type of tag and a second type of tag, where the first type of tag is used to mark an ablation-completed region in the prediction image, and the second type of tag is used to mark a non-ablated region in the prediction image, the non-ablated region being the recurrence region. For example, labels "0" and "1" may be employed: the ablated region in the predicted image is labeled 0 and the non-ablated region is labeled 1.
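The 0/1 labeling scheme can be sketched as follows; the tiny mask below is hypothetical and stands in for the model's labeled prediction image:

```python
def recurrence_coordinates(label_mask):
    """Return the (row, col) pixels labeled 1, i.e. non-ablated / predicted recurrence."""
    return [(r, c) for r, row in enumerate(label_mask)
            for c, v in enumerate(row) if v == 1]

# Toy 4x4 prediction: 0 = ablation completed, 1 = not ablated (recurrence region).
mask = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 0]]
region = recurrence_coordinates(mask)
```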
In some embodiments of the present application, a method for training a target neural network model is further provided, where a specific flow of the method includes the following steps:
the first step is to acquire a training data set, wherein the training data set comprises a plurality of groups of training samples, each group comprising a first sample image, a second sample image and a reference image of the same case; the first sample image is an image obtained by scanning the target physiological tissue of a patient before an operation, the second sample image is a post-operation scan image of the target physiological tissue registered with the first sample image, and the reference image is an image obtained by adding labels to the second sample image;
Specifically, when the training data set is acquired, multiple groups of training samples may be generated from the related scan images of the same patient. Post-operation scan images of the target physiological tissue of the target patient may be obtained at different time points within a preset post-operative period, for example a scan taken 7 days after the operation and a scan taken one month after the operation; each of these post-operative scan images is then registered with the first sample image of the same patient to obtain a plurality of second sample images; finally, labels are added to the second sample images to obtain the reference images, thereby increasing the number of training samples in the training data set.
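The sample-generation procedure above can be sketched as follows. The `register` and `add_labels` callables are stand-ins for the actual registration and labeling steps, which the text does not specify in detail:

```python
def build_training_set(pre_op, post_ops, register, add_labels):
    """Produce one (pre-op, registered post-op, labeled reference) triple per post-op scan."""
    samples = []
    for post in post_ops:
        second = register(post, pre_op)   # second sample image: registered post-op scan
        reference = add_labels(second)    # reference image: second sample image + labels
        samples.append((pre_op, second, reference))
    return samples

# Stub transforms that just tag the image name, for illustration only.
register = lambda post, pre: f"{post}->registered_to_{pre}"
add_labels = lambda img: f"{img}+labels"
data = build_training_set("pre", ["day7", "month1"], register, add_labels)
```

Two post-operative scans of one patient thus yield two training triples, which is how the data set is enlarged.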
As an alternative embodiment, the pre-operative image and the post-operative image may be pre-processed after the pre-operative image and the post-operative image of the patient are acquired. Specific ways of pre-treatment include, but are not limited to: the method comprises the steps of reading, windowing, desensitizing and the like of DICOM (Digital Imaging and Communications in Medicine) data, eliminating other pixel objects in an image such as a bed board and the like, extracting a region of interest in an original image and the like. The region of interest in the original image refers to an image region containing target physiological tissues used in a subsequent prediction or training process.
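Of the preprocessing steps listed, windowing is the most mechanical: Hounsfield units are clipped to a window defined by a level and a width and rescaled for display. A minimal sketch, with illustrative lung-window values (the text does not specify which window is used):

```python
def window_ct(hu_values, level, width):
    """Clip Hounsfield units to [level - width/2, level + width/2], scale to [0, 1]."""
    lo, hi = level - width / 2, level + width / 2

    def clip(v):
        return (min(max(v, lo), hi) - lo) / (hi - lo)

    return [[clip(v) for v in row] for row in hu_values]

# A typical lung window: level -600 HU, width 1500 HU (illustrative values).
slice_hu = [[-1000, -600], [150, 300]]
windowed = window_ct(slice_hu, level=-600, width=1500)
```

Air (-1000 HU) maps near 0, the window center maps to 0.5, and dense tissue saturates at 1, which is what makes lung parenchyma visible in the preprocessed image.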
Secondly, for each group of training samples in the plurality of groups of training samples, inputting a first sample image and a second sample image into the neural network model to be trained, and acquiring a prediction sample image output by the neural network model to be trained, wherein the prediction sample image comprises a label;
the prediction sample image is an image generated by the neural network model to be trained according to the first sample image and the second sample image, the ablation region is marked by using a first type label (label '0') in the prediction sample image, and the non-ablation region is marked by using a second type label (label '1').
And thirdly, adjusting the neural network model to be trained according to the prediction sample image and the reference image, thereby obtaining the target neural network model.
As an alternative embodiment, the step of adjusting the neural network model to be trained according to the prediction sample image and the reference image comprises: comparing the predicted sample image with the reference image to obtain a comparison result, and adjusting parameters of the neural network model to be trained according to the comparison result; or displaying the prediction sample image and the reference image to the second target object, and adjusting the parameters of the neural network model to be trained according to the adjustment operation of the second target object.
Here, the second target object refers to a worker responsible for training the neural network model to be trained. That is, the neural network model to be trained provided in the embodiment of the present application can adjust its parameters automatically according to the comparison result, or a worker can adjust the parameters manually according to the comparison result.
It should be noted that the comparison result includes a similarity between the prediction sample image and the reference image, wherein when the neural network model to be trained automatically adjusts parameters, the step of adjusting the parameters of the neural network model to be trained by the model according to the comparison result includes: and under the condition that the similarity is lower than a preset similarity threshold, adjusting parameters of the neural network model to be trained, generating a prediction sample image again after each adjustment, and comparing the prediction sample image with the reference image until the similarity is not lower than the preset similarity threshold.
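The threshold-driven adjustment loop described above can be sketched as follows. The Dice coefficient is used here as an assumed similarity measure (the text does not name one), and the model prediction and its adjustment are stubs:

```python
def dice(a, b):
    """Dice similarity between two flat binary masks."""
    total = sum(a) + sum(b)
    if total == 0:
        return 1.0  # two empty masks are identical
    inter = sum(x and y for x, y in zip(a, b))
    return 2 * inter / total

def train_until_similar(predict, adjust, reference, threshold, max_steps=100):
    """Adjust, re-predict, and re-compare until similarity reaches the threshold."""
    for step in range(max_steps):
        if dice(predict(), reference) >= threshold:
            return step  # number of adjustments that were needed
        adjust()
    return max_steps

# Stub model: each adjustment flips one more pixel to match the reference mask.
reference = [1, 1, 1, 1]
state = {"n": 1}

def predict():
    return [1] * state["n"] + [0] * (4 - state["n"])

def adjust():
    state["n"] += 1

steps = train_until_similar(predict, adjust, reference, threshold=1.0)
```

In a real training loop the adjustment would be a gradient step on the model parameters rather than a direct edit of the mask.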
After training is completed, the resulting target neural network model is shown in fig. 3. As can be seen from fig. 3, the model is composed of a plurality of convolution layers; its inputs are the preoperative image and the registered image obtained by registering the postoperative image to the preoperative image, and its output is a predicted image in which the ablation region and the non-ablation region are marked. From the predicted image the doctor can quickly determine the recurrence position of the patient, the non-ablation region being the predicted recurrence position.
Specifically, the target neural network model shown in fig. 3 includes a convolutional layer, an activation function layer, a pooling layer, a fully-connected layer, and an output layer. The target neural network model may extract gradient information, edge information, and the like of the input image to output a prediction image. Specifically, a plurality of convolution layers are arranged in the target neural network model, so that image features with different dimensions are extracted. For example, some convolutional layers are responsible for extracting image features of lower dimensions, such as edge features, line features, corner features, and the like of an image, and other convolutional layers can iteratively extract more complex image features from the features of lower dimensions. The target neural network model determines an ablation region and a non-ablation region in the target physiological tissue by extracting image features in the preoperative image and the registration image, wherein the incomplete ablation region is a possible lesion recurrence region.
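The claim that early convolution layers extract edge and line features can be illustrated with a minimal 2D convolution (strictly, cross-correlation, as in deep-learning frameworks) applied with a hand-written difference kernel; a learned layer would arrive at similar kernels:

```python
def conv2d(img, kernel):
    """Valid-mode 2D cross-correlation (no padding, stride 1)."""
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = len(img) - kh + 1, len(img[0]) - kw + 1
    return [[sum(img[i + u][j + v] * kernel[u][v]
                 for u in range(kh) for v in range(kw))
             for j in range(ow)] for i in range(oh)]

# A horizontal-difference kernel responds strongly at the vertical edge of this step image.
step = [[0, 0, 1, 1],
        [0, 0, 1, 1],
        [0, 0, 1, 1]]
edge_kernel = [[-1, 1]]  # 1x2 difference: nonzero only where intensity changes
response = conv2d(step, edge_kernel)
```

The response is nonzero exactly at the intensity boundary, which is the low-dimensional edge feature the early layers are said to capture; deeper layers compose such responses into more complex features.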
The embodiment of the present application further provides a procedure for predicting recurrence sites of lung cancer, fig. 4 is a schematic flow chart of the procedure, and as shown in fig. 4, the procedure includes the following steps:
Step S402, acquiring preoperative lung CT images of a plurality of patients, and preprocessing the images to obtain an image F;
Step S404, acquiring CT images of the plurality of patients within a preset time period after the operation, and preprocessing the images to obtain an image M;
Step S406, registering the image M of the same patient to the image F to obtain an image F′;
Step S408, aligning the lesion areas in the image F and the image F′, and marking the ablation-uncovered areas and the ablation areas in the image F′ with respective labels to obtain a label image H;
Step S410, taking the image F, the image F′ and the image H of the same patient as a group of training data, and training the neural network model to be trained with a plurality of such groups. The inputs of the neural network model to be trained are the image F and the image F′; it outputs a predicted image with labels, and the parameters of the neural network model to be trained are adjusted by comparing the predicted image with the image H;
Step S412, acquiring a preoperative CT image and a postoperative CT image of a new patient, preprocessing them and performing image registration to obtain a registered image, and inputting the preoperative CT image and the registered image into the trained neural network model to obtain a predicted recurrence image.
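At inference time, steps S402–S412 reduce to preprocessing, registration, and a model forward pass. A structural sketch, with stub callables standing in for the real preprocessing, registration, and trained model:

```python
def predict_recurrence(pre_ct, post_ct, preprocess, register, model):
    """Pipeline for a new patient: preprocess both scans, register, then predict."""
    f = preprocess(pre_ct)    # image F: preprocessed preoperative CT
    m = preprocess(post_ct)   # image M: preprocessed postoperative CT
    f_reg = register(m, f)    # registered image (F' in the training procedure)
    return model(f, f_reg)    # labeled prediction image

# Stubs for illustration; each real component is a nontrivial algorithm.
preprocess = lambda img: ("pre", img)
register = lambda m, f: ("reg", m, f)
model = lambda f, f_reg: {"inputs": (f, f_reg), "labels": [0, 1]}
out = predict_recurrence("ct_pre", "ct_post", preprocess, register, model)
```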
The method comprises: obtaining a first target image and a second target image, wherein the first target image is obtained by scanning the target physiological tissue of a first target object before an operation, and the second target image is obtained by scanning the target physiological tissue after the operation is finished; obtaining a third target image according to the first target image and the second target image, wherein the first anatomical point in the first target image matches the second anatomical point in the third target image; and analyzing the first target image and the third target image with a target neural network model to obtain a predicted image output by the model, the predicted image comprising a label used to determine the lesion recurrence area of the target physiological tissue. By registering the postoperative scan image with the preoperative scan image to obtain a registered image, and processing the preoperative scan image and the registered image with the neural network model to obtain a predicted image of the recurrence position, the recurrence position of the patient can be determined without manual radiograph reading. This achieves the technical effects of improving the accuracy of recurrence position prediction and reducing the doctor's workload, and solves the technical problem in the related art that recurrence cannot be accurately determined when a manual radiograph-reading approach is used.
The embodiment of the present application provides a recurrence location prediction device, and fig. 5 is a schematic structural diagram of the recurrence location prediction device. As shown in fig. 5, the apparatus includes: an obtaining module 50, configured to obtain a first target image and a second target image, where the first target image is an image obtained after scanning a target tissue of a first target object before an operation, and the second target image is an image obtained after scanning the target tissue after the operation is completed; the first processing module 52 is configured to obtain a third target image according to the first target image and the second target image, where a first anatomical point in the first target image is matched with a second anatomical point in the third target image; and a second processing module 54, configured to process the first target image and the second target image by using the target neural network model, so as to obtain a predicted image output by the target neural network model, where the predicted image includes a tag, and the tag is used to determine a lesion recurrence region of the target physiological tissue in the predicted image.
In some embodiments of the present application, the first processing module 52 is further configured to: determining position information of a first anatomical point in a first target image in the first target image; and registering the second target image with the first target image according to the position information of the first anatomical point to obtain a third target image, wherein the position information of the second anatomical point in the third target image is the same as the position information of the first anatomical point in the first target image, and the first anatomical point and the second anatomical point are the same anatomical point on the target physiological tissue.
In some embodiments of the present application, the tags include a first type of tag and a second type of tag, wherein the first type of tag is used to mark an ablation-completed region in the predictive image, and the second type of tag is used to mark an unablated region in the predictive image, and the unablated region is a recurrent region.
In some embodiments of the present application, as shown in fig. 6, the recurrence position prediction apparatus further includes a model training module 56, wherein the model training module 56 includes: a training data obtaining module 58, configured to obtain a training data set, where the training data set includes multiple groups of training samples, each group of training samples in the multiple groups of training samples includes a first sample image, a second sample image, and a reference image, the first sample image is an image obtained by scanning a target physiological tissue of a patient before an operation, the second sample image is a post-operation scanned image of the target physiological tissue registered with the first sample image, and the reference image is an image obtained by adding a label to the second sample image; a first training module 510, configured to, for each training sample in the multiple sets of training samples, input a first sample image and a second sample image into a neural network model to be trained, and obtain a prediction sample image output by the neural network model to be trained, where the prediction sample image includes a label; and the second training module 512 is configured to adjust the neural network model to be trained according to the prediction sample image and the reference image, so as to obtain a target neural network model.
In some embodiments of the present application, as shown in fig. 7, the second training module 512 further includes an adjusting module 514 or an interacting module 516, where: an adjusting module 514, configured to compare the predicted sample image with the reference image to obtain a comparison result, and adjust a parameter of the neural network model to be trained according to the comparison result; and the interaction module 516 is configured to display the prediction sample image and the reference image to the second target object, and adjust a parameter of the neural network model to be trained according to an adjustment operation of the second target object.
In some embodiments of the present application, the comparison result comprises a similarity between the predicted sample image and the reference image; the adjusting module 514 is further configured to adjust parameters of the neural network model to be trained when the similarity is lower than the preset similarity threshold, regenerate the predicted sample image after each adjustment, and compare the regenerated predicted sample image with the reference image until the similarity in the comparison result is not lower than the preset similarity threshold.
In some embodiments of the present application, the training data obtaining module 58 is further configured to obtain post-operation scanning images obtained after scanning the target physiological tissue of the target patient corresponding to different time points within a post-operation preset time period; registering each postoperative scanned image in the plurality of postoperative scanned images with a first sample image of a target patient to obtain a plurality of second sample images; and adding labels to the second sample images to obtain a plurality of reference images.
The modules in the recurrence location prediction apparatus may be program modules (for example, a set of program instructions for implementing a specific function), or may be hardware modules, and in the latter case, the following forms may be presented, but are not limited to the following forms: the above modules are all represented by one processor, or the functions of the above modules are realized by one processor.
The embodiment of the application provides a nonvolatile storage medium. The non-volatile storage medium stores a program, and the device where the non-volatile storage medium is controlled to execute the following recurrence position prediction method when the program runs: acquiring a first target image and a second target image, wherein the first target image is an image obtained after scanning a target physiological tissue of a first target object before an operation, and the second target image is an image obtained after scanning the target physiological tissue after the operation is finished; obtaining a third target image according to the first target image and the second target image, wherein the first anatomical point in the first target image is matched with the second anatomical point in the third target image; and analyzing the first target image and the third target image by adopting the target neural network model to obtain a predicted image output by the target neural network model, wherein the predicted image comprises a tag which is used for determining a lesion recurrence region of the target physiological tissue in the predicted image.
The embodiment of the application provides electronic equipment. The electronic device comprises a memory and a processor, wherein the processor is used for operating a program stored in the memory, and the program executes the following recurrence position prediction method: acquiring a first target image and a second target image, wherein the first target image is an image obtained after scanning a target physiological tissue of a first target object before an operation, and the second target image is an image obtained after scanning the target physiological tissue after the operation is finished; obtaining a third target image according to the first target image and the second target image, wherein the first anatomical point in the first target image is matched with the second anatomical point in the third target image; and analyzing the first target image and the third target image by adopting the target neural network model to obtain a predicted image output by the target neural network model, wherein the predicted image comprises a tag which is used for determining a lesion recurrence region of the target physiological tissue in the predicted image.
The embodiment of the application provides a computer program product, which comprises a computer program. The computer program, when executed by the processor, implements a recurrence location prediction method as follows: acquiring a first target image and a second target image, wherein the first target image is obtained after scanning a target physiological tissue of a first target object before an operation, and the second target image is obtained after scanning the target physiological tissue after the operation is finished; obtaining a third target image according to the first target image and the second target image, wherein the first anatomical point in the first target image is matched with the second anatomical point in the third target image; and analyzing the first target image and the third target image by adopting a target neural network model to obtain a predicted image output by the target neural network model, wherein the predicted image comprises a label, and the label is used for determining a lesion recurrence area of a target physiological tissue in the predicted image.
In the embodiments of the present application, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to the related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technical content can be implemented in other manners. The above-described embodiments of the apparatus are merely illustrative, and for example, a division of a unit may be a division of a logic function, and an actual implementation may have another division, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or may not be executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may also be implemented in the form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk, and various media capable of storing program codes.
The foregoing is only a preferred embodiment of the present application and it should be noted that those skilled in the art can make various improvements and modifications without departing from the principle of the present application, and these improvements and modifications should also be considered as the protection scope of the present application.

Claims (10)

1. A recurrence location prediction method, comprising:
acquiring a first target image and a second target image, wherein the first target image is an image obtained after scanning a target physiological tissue of a first target object before an operation, and the second target image is an image obtained after scanning the target physiological tissue after the operation is completed;
obtaining a third target image according to the first target image and the second target image, wherein a first anatomical point in the first target image is matched with a second anatomical point in the third target image;
and analyzing the first target image and the third target image by adopting a target neural network model to obtain a predicted image output by the target neural network model, wherein the predicted image comprises a label, and the label is used for determining a lesion recurrence area of a target physiological tissue in the predicted image.
2. A recurrence location prediction device, comprising:
the system comprises an acquisition module, a processing module and a display module, wherein the acquisition module is used for acquiring a first target image and a second target image, the first target image is obtained after scanning a target physiological tissue of a first target object before an operation, and the second target image is obtained after scanning the target physiological tissue after the operation is completed;
the first processing module is used for obtaining a third target image according to the first target image and the second target image, wherein a first anatomical point in the first target image is matched with a second anatomical point in the third target image;
and the second processing module is used for processing the first target image and the second target image by adopting a target neural network model to obtain a predicted image output by the target neural network model, wherein the predicted image comprises a tag which is used for determining a lesion recurrence region of a target physiological tissue in the predicted image.
3. The recurrence location prediction device of claim 2, wherein the first processing module is further configured to:
determining position information of the first anatomical point in the first target image;
and registering the second target image with the first target image according to the position information of the first anatomical point to obtain a third target image, wherein the position information of a second anatomical point in the third target image is the same as the position information of the first anatomical point in the first target image, and the first anatomical point and the second anatomical point are the same anatomical point on the target physiological tissue.
4. The recurrence position prediction apparatus according to claim 2, wherein the tags comprise a first class of tags and a second class of tags, wherein the first class of tags is used for marking ablation-completed regions in the prediction image, and the second class of tags is used for marking non-ablated regions in the prediction image, and the non-ablated regions are the recurrence regions.
5. The recurrence location prediction apparatus according to claim 2, further comprising a model training module, wherein the model training module comprises:
a training data acquisition module, configured to acquire a training data set, where the training data set includes multiple sets of training samples, each set of training samples in the multiple sets of training samples includes a first sample image, a second sample image, and a reference image, the first sample image is an image obtained by scanning the target tissue of a patient before an operation, the second sample image is a post-operation scanned image of the target tissue registered with the first sample image, and the reference image is an image obtained by adding the tag to the second sample image;
the first training module is used for inputting the first sample image and the second sample image into a neural network model to be trained and acquiring a prediction sample image output by the neural network model to be trained for each training sample in the plurality of groups of training samples, wherein the prediction sample image comprises the label;
and the second training module is used for adjusting the neural network model to be trained according to the prediction sample image and the reference image so as to obtain the target neural network model.
6. The recurrence location prediction device of claim 5, wherein the second training module further comprises an adjustment module or an interaction module, wherein:
the adjusting module is used for comparing the prediction sample image with the reference image to obtain a comparison result, and adjusting the parameters of the neural network model to be trained according to the comparison result;
and the interaction module is used for displaying the prediction sample image and the reference image to a second target object and adjusting the parameters of the neural network model to be trained according to the adjustment operation of the second target object.
7. The recurrence position prediction apparatus according to claim 6, wherein the comparison result includes a degree of similarity between the prediction sample image and the reference image; the adjusting module is further configured to adjust parameters of the neural network model to be trained when the similarity is lower than a preset similarity threshold, regenerate a prediction sample image after each adjustment, and compare the regenerated prediction sample image with the reference image until the similarity in the obtained comparison result is not lower than the preset similarity threshold.
8. The recurrence position prediction device of claim 7, wherein the training data acquiring module is further configured to acquire post-operation scanning images obtained after scanning the target physiological tissue of the target patient corresponding to different time points within a post-operation preset time period; registering each of the plurality of post-operative scan images with the first sample image of the target patient, resulting in a plurality of second sample images; and adding the label to a plurality of second sample images to obtain a plurality of reference images.
9. A non-volatile storage medium, wherein a program is stored in the non-volatile storage medium, and wherein when the program runs, a device in which the non-volatile storage medium is located is controlled to execute the recurrence location prediction method according to claim 1.
10. An electronic device, comprising: a memory and a processor for executing a program stored in the memory, wherein the program when executed performs the recurrence location prediction method of claim 1.
CN202310260320.XA 2023-03-17 2023-03-17 Recurrence position prediction method, recurrence position prediction device, nonvolatile storage medium, and electronic device Pending CN115966309A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310260320.XA CN115966309A (en) 2023-03-17 2023-03-17 Recurrence position prediction method, recurrence position prediction device, nonvolatile storage medium, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310260320.XA CN115966309A (en) 2023-03-17 2023-03-17 Recurrence position prediction method, recurrence position prediction device, nonvolatile storage medium, and electronic device

Publications (1)

Publication Number Publication Date
CN115966309A true CN115966309A (en) 2023-04-14

Family

ID=87358705

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310260320.XA Pending CN115966309A (en) 2023-03-17 2023-03-17 Recurrence position prediction method, recurrence position prediction device, nonvolatile storage medium, and electronic device

Country Status (1)

Country Link
CN (1) CN115966309A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117476246A (en) * 2023-12-25 2024-01-30 福建大数据一级开发有限公司 Patient survival analysis method, medium and device based on multi-type recurrent events

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110660481A (en) * 2019-09-27 2020-01-07 颐保医疗科技(上海)有限公司 Artificial intelligence technology-based primary liver cancer recurrence prediction method
CN113284126A (en) * 2021-06-10 2021-08-20 安徽省立医院(中国科学技术大学附属第一医院) Method for predicting hydrocephalus shunt operation curative effect by artificial neural network image analysis
WO2021234304A1 (en) * 2020-05-20 2021-11-25 Quantum Surgical Method for predicting the recurrence of a lesion by image analysis
CN113994380A (en) * 2020-05-20 2022-01-28 康坦手术股份有限公司 Ablation region determination method based on deep learning
CN115410709A (en) * 2022-08-25 2022-11-29 新疆医科大学第一附属医院 Prediction system for recurrence and metastasis after liver cancer radiofrequency ablation

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
MA KAILI, WANG CHUANCHUAN: "A deep learning-based method for estimating the number of signal sources", Aerospace Electronic Warfare (《航天电子对抗》), no. 3, pages 135 - 11 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117476246A (en) * 2023-12-25 2024-01-30 福建大数据一级开发有限公司 Patient survival analysis method, medium and device based on multi-type recurrent events
CN117476246B (en) * 2023-12-25 2024-04-19 福建大数据一级开发有限公司 Patient survival analysis method, medium and device based on multi-type recurrent events

Similar Documents

Publication Publication Date Title
Jaskari et al. Deep learning method for mandibular canal segmentation in dental cone beam computed tomography volumes
US20230123842A1 (en) Method for predicting morphological changes of liver tumor after ablation based on deep learning
JP5814504B2 (en) Medical image automatic segmentation system, apparatus and processor using statistical model
CN109801272B (en) Liver tumor automatic segmentation positioning method, system and storage medium
CN111161270A (en) Blood vessel segmentation method for medical image, computer device and readable storage medium
CN112885453A (en) Method and system for identifying pathological changes in subsequent medical images
US20140341449A1 (en) Computer system and method for atlas-based consensual and consistent contouring of medical images
CN111160367A (en) Image classification method and device, computer equipment and readable storage medium
US20240006053A1 (en) Systems and methods for planning medical procedures
US10219768B2 (en) Method for standardizing target lesion selection and tracking on medical images
CN103678837A (en) Method and device for determining processing remains of target area
CN115966309A (en) Recurrence position prediction method, recurrence position prediction device, nonvolatile storage medium, and electronic device
CN111904379A (en) Scanning method and device of multi-modal medical equipment
Zhang et al. Learning-based coronal spine alignment prediction using smartphone-acquired scoliosis radiograph images
CN113689937A (en) Image annotation method, storage medium and processor
CN113159040A (en) Method, device and system for generating medical image segmentation model
CN113808125A (en) Medical image processing method, focus type identification method and related product
CN112561877A (en) Multi-scale double-channel convolution model training method, image processing method and device
WO2008141293A9 (en) Image segmentation system and method
CN112308764A (en) Image registration method and device
CN113706559A (en) Blood vessel segmentation extraction method and device based on medical image
CN112515767A (en) Surgical navigation device, surgical navigation apparatus, and computer-readable storage medium
CN106096322B (en) Liver and kidney medical image data cooperative processing system
Wimmer et al. Fully automatic cross-modality localization and labeling of vertebral bodies and intervertebral discs in 3D spinal images
CN114283921A (en) System and storage medium for determining tumor target area contour set after drug treatment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20230414