WO2021082416A1 - Network model training method and apparatus, and lesion area determination method and apparatus - Google Patents

Network model training method and apparatus, and lesion area determination method and apparatus

Info

Publication number
WO2021082416A1
Authority
WO
WIPO (PCT)
Prior art keywords
network model
training data
lesion area
model
medical image
Prior art date
Application number
PCT/CN2020/092570
Other languages
English (en)
French (fr)
Inventor
王少康
陈宽
Original Assignee
北京推想科技有限公司
Priority date
Filing date
Publication date
Application filed by 北京推想科技有限公司
Publication of WO2021082416A1

Classifications

    • G06T 7/11: Region-based segmentation (G06T Image data processing or generation; G06T 7/00 Image analysis; G06T 7/10 Segmentation, edge detection)
    • G06N 3/045: Combinations of networks (G06N Computing arrangements based on specific computational models; G06N 3/02 Neural networks; G06N 3/04 Architecture, e.g. interconnection topology)
    • G06N 3/08: Learning methods (G06N 3/02 Neural networks)
    • G06T 7/0012: Biomedical image inspection (G06T 7/00 Image analysis; G06T 7/0002 Inspection of images, e.g. flaw detection)
    • G16H 50/20: ICT specially adapted for medical diagnosis, medical simulation or medical data mining, for computer-aided diagnosis, e.g. based on medical expert systems
    • G06T 2207/20081: Training; Learning (G06T 2207/00 Indexing scheme for image analysis or image enhancement; G06T 2207/20 Special algorithmic details)
    • G06T 2207/20084: Artificial neural networks [ANN] (G06T 2207/20 Special algorithmic details)
    • G06T 2207/30061: Lung (G06T 2207/30 Subject of image; G06T 2207/30004 Biomedical image processing)

Definitions

  • the present disclosure relates to the field of image processing technology, in particular to a network model training method and device, a lesion area determination method and device, a computer-readable storage medium, and electronic equipment.
  • the embodiments of the present application provide a network model training method and device, a lesion area determination method and device, a computer-readable storage medium, and electronic equipment.
  • an embodiment of the present disclosure provides a network model training method.
  • the network model training method includes: determining first training data based on a sample image, where the sample image includes a lesion area and the first training data includes labeled first lesion area coordinate information and first lesion type information; and determining an initial network model and training the initial network model based on the sample image to generate a network model for determining the lesion area in a medical image.
  • an embodiment of the present disclosure provides a method for determining a lesion area.
  • the method for determining a lesion area includes: determining a medical image for which a lesion area needs to be determined; and inputting the medical image into the network model for determining the lesion area in the medical image, so as to determine the lesion area coordinate information of the medical image, where the network model for determining the lesion area in the medical image can be obtained based on the network model training method mentioned in the above embodiment.
  • an embodiment of the present disclosure provides a network model training device.
  • the network model training device includes: a first training data determining module, configured to determine first training data based on a sample image, where the sample image includes a lesion area and the first training data includes labeled first lesion area coordinate information and first lesion type information; and a training module, configured to determine an initial network model and train the initial network model based on the sample image to generate a network model for determining the lesion area in a medical image.
  • an embodiment of the present disclosure provides an apparatus for determining a lesion area.
  • the apparatus for determining a lesion area includes: an image determining module, configured to determine a medical image for which a lesion area needs to be determined; and a lesion area determining module, configured to input the medical image into the network model for determining the lesion area in the medical image, so as to determine the lesion area coordinate information of the medical image, where the network model for determining the lesion area in the medical image can be obtained based on the network model training method mentioned in the above embodiment.
  • the embodiments of the present disclosure provide a computer-readable storage medium that stores a computer program, the computer program being used to execute the network model training method mentioned in the above embodiment or the method for determining the lesion area mentioned in the above embodiment.
  • an embodiment of the present disclosure provides an electronic device that includes a processor and a memory for storing instructions executable by the processor, where the processor is configured to execute the network model training method mentioned in the above embodiment or the method for determining the lesion area mentioned in the above embodiment.
  • the network model training method determines the first training data based on the sample image, then determines the initial network model, and trains the initial network model based on the sample image to generate a network model for determining the lesion area in the medical image; the method thus achieves the purpose of training an initial network model with sample images to generate a network model for determining the lesion area in a medical image.
  • since the sample image is a medical image that includes a lesion area, and the first training data determined based on the sample image includes the labeled first lesion area coordinate information and first lesion type information, the network model for determining the lesion area in a medical image that is generated by training on the sample image can be used to assist a doctor in determining the lesion area in any medical image of the same type as the sample image. That is, using the network model training method provided by the embodiments of the present disclosure to train and generate the network model for determining the lesion area in the medical image can assist the doctor in determining the lesion area in a medical image of the same type as the sample image (for example, determining the coordinate information of the lesion area).
  • the method for determining the lesion area provided by the embodiment of the present disclosure inputs the medical image for which the lesion area needs to be determined into the network model for determining the lesion area in the medical image, so as to determine the lesion area coordinate information of the medical image, thereby achieving the purpose of determining the coordinate information of the lesion area in the medical image. Since the method is implemented based on the network model for determining the lesion area in the medical image, compared with existing solutions, the embodiment of the present disclosure does not need to perform complex processing operations such as image enhancement and filter transformation on the medical image, thereby avoiding failures in predicting the lesion area coordinate information caused by factors such as image quality. That is, the method for determining the lesion area provided by the embodiment of the present disclosure has high stability and good robustness.
  • FIG. 1 is a schematic diagram of a scenario to which the embodiments of the present disclosure are applicable.
  • FIG. 2 is a schematic diagram of another scenario to which the embodiments of the present disclosure are applicable.
  • Fig. 3 is a schematic flowchart of a network model training method provided by an exemplary embodiment of the present disclosure.
  • FIG. 4 is a schematic flowchart of training an initial network model based on sample images to generate a network model for determining a lesion area in a medical image according to an exemplary embodiment of the present disclosure.
  • FIG. 5 is a schematic flowchart of training an initial network model based on sample images to generate a network model for determining a lesion area in a medical image according to another exemplary embodiment of the present disclosure.
  • FIG. 6 is a schematic flowchart of adjusting the network parameters of the prediction model and of the image feature extraction model based on the second lesion type information in the second training data and the first lesion type information in the first training data, provided by an exemplary embodiment of the present disclosure.
  • FIG. 7 is a schematic flowchart of training an initial network model based on sample images to generate a network model for determining a lesion area in a medical image according to another exemplary embodiment of the present disclosure.
  • Fig. 8 is a schematic structural diagram of an initial network model provided by an exemplary embodiment of the present disclosure.
  • FIG. 9 is a schematic flowchart of performing the first parameter adjustment operation on the initial network model based on the first training data and the second training data to generate a network model for determining the lesion area in a medical image, provided by an exemplary embodiment of the present disclosure.
  • FIG. 10 is a schematic diagram of a process of determining first training data based on sample images according to an exemplary embodiment of the present disclosure.
  • FIG. 11 is a schematic flowchart of a method for determining a lesion area provided by an exemplary embodiment of the present disclosure.
  • FIG. 12 is a schematic flowchart of a method for determining a lesion area according to another exemplary embodiment of the present disclosure.
  • FIG. 13 is a schematic diagram of region division of a medical image provided by an exemplary embodiment of the present disclosure.
  • FIG. 14 is a schematic diagram of a process of performing a region division operation on a medical image to generate multiple divided regions according to an exemplary embodiment of the present disclosure.
  • FIG. 15 is a schematic flowchart of determining the positional relationship between the lesion area and multiple divided areas based on the coordinate information of the lesion area according to an exemplary embodiment of the present disclosure.
  • FIG. 16 is a schematic diagram of positioning the lesion area of the medical image including the lung field area based on the area division shown in FIG. 13.
  • Fig. 17 is a schematic structural diagram of a network model training device provided by an exemplary embodiment of the present disclosure.
  • Fig. 18 is a schematic structural diagram of a training module provided by an exemplary embodiment of the present disclosure.
  • FIG. 19 is a schematic structural diagram of a training module provided by another exemplary embodiment of the present disclosure.
  • Fig. 20 is a schematic structural diagram of a first training subunit provided by an exemplary embodiment of the present disclosure.
  • FIG. 21 is a schematic structural diagram of a training module provided by another exemplary embodiment of the present disclosure.
  • Fig. 22 is a schematic structural diagram of a training unit provided by an exemplary embodiment of the present disclosure.
  • FIG. 23 is a schematic structural diagram of a first training data determining module provided by an exemplary embodiment of the present disclosure.
  • FIG. 24 is a schematic structural diagram of a device for determining a lesion area provided by an exemplary embodiment of the present disclosure.
  • FIG. 25 is a schematic structural diagram of an apparatus for determining a focus area provided by another exemplary embodiment of the present disclosure.
  • FIG. 26 is a schematic structural diagram of a divided area generating module provided by an exemplary embodiment of the present disclosure.
  • Fig. 27 is a schematic structural diagram of a position relationship determining module provided by an exemplary embodiment of the present disclosure.
  • FIG. 28 is a schematic structural diagram of an electronic device provided by an exemplary embodiment of the present disclosure.
  • Medical imaging uses a certain medium (such as X-rays, electromagnetic fields, or ultrasound) to interact with the human or animal body and present information such as the structure and density of internal tissues and organs as an image.
  • medical imaging is mainly divided into anatomical structure images, which describe physiological morphology, and functional images, which describe bodily functions or metabolism.
  • medical imaging is an important tool for disease prevention and treatment.
  • the anatomical structure images describing physiological morphology mainly include X-ray images, Computed Tomography (CT) images, and Magnetic Resonance Imaging (MRI) images.
  • based on the imaging principle, X-ray imaging can be divided into computed radiography (CR) and digital radiography (DR).
  • the anatomical structure images describing physiological morphology can clearly show the morphology and pathological conditions of tissues and organs, which helps to determine the location and type of lesions in the tissues and organs, and thus provides a prerequisite for an accurate disease treatment plan.
  • tuberculosis is a chronic infectious disease caused by Mycobacterium tuberculosis, which can invade many organs, and its harmfulness is self-evident.
  • pulmonary tuberculosis is divided into six types: primary pulmonary tuberculosis, hematogenously disseminated pulmonary tuberculosis, secondary pulmonary tuberculosis, tracheobronchial tuberculosis, tuberculous pleurisy, and old pulmonary tuberculosis.
  • the basic idea of the present disclosure is to propose a network model training method and device, a lesion area determination method and device, a computer-readable storage medium, and electronic equipment.
  • the network model training method determines the first training data based on the sample image, then determines the initial network model, and trains the initial network model based on the sample image to generate a network model for determining the lesion area in the medical image.
  • that is, the initial network model is trained with the sample image for the purpose of generating a network model for determining the lesion area in the medical image. Since the sample image is a medical image that includes a lesion area, and the first training data determined based on the sample image includes the labeled first lesion area coordinate information and first lesion type information, the network model generated by training on the sample image can be used to assist a doctor in determining the lesion area in any medical image of the same type as the sample image.
  • the embodiments of the present disclosure can thus determine the lesion area in a medical image of the same type as the sample image (for example, determining the coordinate information corresponding to the lesion area); therefore, compared with the prior art, the embodiments of the present disclosure can effectively improve the efficiency and accuracy of determining the lesion area.
  • the method for determining the lesion area inputs the medical image for which the lesion area needs to be determined into the network model for determining the lesion area in the medical image, so as to determine the lesion area coordinate information of the medical image, thereby achieving the purpose of determining the coordinate information of the lesion area in the medical image. Since the method for determining the lesion area provided by the embodiment of the present disclosure is implemented based on the network model for determining the lesion area in the medical image, compared with existing solutions, the embodiment of the present disclosure does not need to perform complex processing operations such as image enhancement and filter transformation on the medical image, thereby avoiding failures in predicting the lesion area coordinate information caused by factors such as image quality. That is, the method has high stability and good robustness.
  • FIG. 1 is a schematic diagram of a scenario to which the embodiments of the present disclosure are applicable.
  • the applicable scenario of the embodiment of the present disclosure includes a server 1 and an image acquisition device 2, where the server 1 and the image acquisition device 2 are communicatively connected.
  • the image acquisition device 2 is used to acquire a medical image including a lesion area as a sample image, and the server 1 is used to determine the first training data based on the sample image collected by the image acquisition device 2, then determine an initial network model, and train the initial network model based on the sample image to generate a network model for determining the lesion area in a medical image, where the first training data includes the labeled first lesion area coordinate information and first lesion type information. That is, this scenario implements a network model training method.
  • the image acquisition device 2 is used to acquire a medical image for which the lesion area needs to be determined, and the server 1 is used to input the medical image collected by the image acquisition device 2 into the network model for determining the lesion area in the medical image, so as to determine the lesion area coordinate information of the medical image. That is, this scenario implements a method for determining the lesion area.
  • the network model used to determine the lesion area in the medical image may be the network model generated in the above scenario. Since the scenario shown in FIG. 1 uses the server 1 to implement the network model training method and/or the lesion area determination method, it can not only improve the adaptability of the scenario but also effectively reduce the computation load of the image acquisition device 2.
  • FIG. 2 is a schematic diagram of another scenario to which the embodiments of the present disclosure are applicable.
  • the scene includes an image processing device 3, and the image processing device 3 includes an image acquisition module 31 and a calculation module 32.
  • the image acquisition module 31 in the image processing device 3 is used to acquire a medical image including the lesion area as a sample image
  • the calculation module 32 in the image processing device 3 is used to determine the first training data based on the sample image collected by the image acquisition module 31, then determine an initial network model, and train the initial network model based on the sample image to generate a network model for determining the lesion area in a medical image, where the first training data includes the labeled first lesion area coordinate information and first lesion type information. That is, this scenario implements a network model training method.
  • the image acquisition module 31 in the image processing device 3 is used to acquire a medical image for which the lesion area needs to be determined, and the calculation module 32 in the image processing device 3 is used to input the medical image acquired by the image acquisition module 31 into the network model for determining the lesion area in the medical image, so as to determine the lesion area coordinate information of the medical image. That is, this scenario implements a method for determining the lesion area.
  • the network model used to determine the lesion area in the medical image may be the network model generated in the above scenario. Since the scenario shown in FIG. 2 uses the image processing device 3 to implement the network model training method and/or the lesion area determination method, there is no need to perform data transmission operations with related devices such as servers; therefore, this scenario can ensure the real-time performance of the network model training method or the lesion area determination method.
  • the image acquisition device 2 and the image acquisition module 31 mentioned in the above scenario include, but are not limited to, image acquisition devices such as X-ray machines, CT scanners, and MRI equipment.
  • the medical images collected by the image acquisition device 2 and the image acquisition module 31 mentioned in the above scenes include, but are not limited to, X-ray images, CT images, and MRI images, that is, medical images that present information such as the internal tissues, organs, and density of the human or animal body as an image.
  • the network model training method and the lesion area determination method provided by the embodiments of the present disclosure are not limited to the above medical image scenarios; any application scenario involving determination based on a characteristic region falls within the scope of the embodiments of the present disclosure, for example, scenarios in which a region of interest in a surveillance image is determined.
  • Fig. 3 is a schematic flowchart of a network model training method provided by an exemplary embodiment of the present disclosure. As shown in FIG. 3, the network model training method provided by the embodiment of the present disclosure includes the following steps.
  • Step 10 Determine the first training data based on the sample image, where the sample image includes the lesion area, and the first training data includes the labeled first lesion area coordinate information and the first lesion type information.
  • the sample image is a medical image including a lesion area.
  • Step 20 Determine an initial network model, and train the initial network model based on the sample image to generate a network model for determining the lesion area in the medical image.
  • the initial network model is a convolutional neural network (CNN) model.
  • the model structure of the initial network model is the same as that of the network model used to determine the lesion area in the medical image; the two differ only in their network parameters. That is, the network parameters in the initial network model are initial network parameters, the sample images are then used to train the initial network model, and during the training process the initial network parameters are adjusted to finally generate the network parameters of the network model used to determine the lesion area in the medical image. For example, the network parameters of the initial network model are continuously adjusted based on the gradient descent method to finally generate the network parameters of the network model used to determine the lesion area in the medical image.
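  • as a concrete illustration of the gradient descent adjustment described above, the following is a minimal, hypothetical sketch in PyTorch; the model structure, tensor shapes, and loss terms are illustrative assumptions and are not specified by the present disclosure.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical initial network model: the structure is the same as the final
# model, only the network parameters change during training.
class InitialNetworkModel(nn.Module):
    def __init__(self, num_types: int = 6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.coord_head = nn.Linear(16, 4)         # lesion box (x1, y1, x2, y2)
        self.type_head = nn.Linear(16, num_types)  # lesion type logits

    def forward(self, x):
        f = self.features(x)
        return self.coord_head(f), self.type_head(f)

model = InitialNetworkModel()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)  # gradient descent

# Dummy sample images and first training data (labeled coordinates and types).
sample_images = torch.randn(2, 1, 64, 64)
first_coords = torch.rand(2, 4)
first_types = torch.randint(0, 6, (2,))

for step in range(100):
    pred_coords, type_logits = model(sample_images)  # model's predictions
    loss = (F.smooth_l1_loss(pred_coords, first_coords)
            + F.cross_entropy(type_logits, first_types))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()  # adjust the initial network parameters
# After training, `model` plays the role of the network model used to
# determine the lesion area in a medical image.
```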
  • the first training data is first determined based on the sample image, and then the initial network model is determined, and the initial network model is trained based on the sample image to generate a network model for determining the lesion area in the medical image.
  • the network model training method determines the first training data based on the sample image, then determines the initial network model, and trains the initial network model based on the sample image to generate a network model for determining the lesion area in the medical image; the method thus achieves the purpose of training an initial network model with sample images to generate a network model for determining the lesion area in a medical image.
  • since the sample image is a medical image that includes a lesion area, and the first training data determined based on the sample image includes the labeled first lesion area coordinate information and first lesion type information, the network model for determining the lesion area in a medical image that is generated by training on the sample image can be used to assist a doctor in determining the lesion area in any medical image of the same type as the sample image. That is, using the network model training method provided by the embodiments of the present disclosure to train and generate the network model for determining the lesion area in the medical image can assist the doctor in determining the lesion area in a medical image of the same type as the sample image (for example, determining the coordinate information of the lesion area).
  • the medical image of the same type as the sample image mentioned above means that the tissues and organs shown in the medical image are of the same type as those in the sample image.
  • for example, if the sample image is a chest radiograph including the lung field area of the human body, the medical image is also a chest radiograph including the lung field area of the human body.
  • if the sample image is a head image including the human brain region, the medical image is also a head image including the human brain region.
  • if the sample image is a lung image including a pulmonary tuberculosis lesion area, the first lesion type information includes at least one of primary pulmonary tuberculosis, hematogenously disseminated pulmonary tuberculosis, secondary pulmonary tuberculosis, tracheobronchial tuberculosis, tuberculous pleurisy, and old pulmonary tuberculosis.
  • the network model for determining the lesion area in the medical image generated by the above embodiment can then be used to predict the coordinate information of the lesion area of a medical image including a pulmonary tuberculosis lesion area.
  • FIG. 4 is a schematic flowchart of training an initial network model based on sample images to generate a network model for determining a lesion area in a medical image according to an exemplary embodiment of the present disclosure.
  • the embodiment shown in FIG. 4 of the present disclosure is extended on the basis of the embodiment shown in FIG. 3. The following focuses on the differences between the embodiment shown in FIG. 4 and the embodiment shown in FIG. 3; the similarities are not repeated here.
  • training an initial network model based on a sample image to generate a network model for determining a lesion area in a medical image includes the following steps.
  • Step 21 Input the sample image into the initial network model to determine second training data corresponding to the first training data, where the second training data includes the second lesion area coordinate information and the second lesion type information.
  • the second training data refers to the training data corresponding to the sample image determined by the initial network model after the sample image is input to the initial network model (wherein, the training data includes the focus area coordinate information and the focus type information).
  • Step 22 Perform a first parameter adjustment operation on the initial network model based on the first training data and the second training data to generate a network model for determining the lesion area in the medical image.
  • the specific network parameters adjusted in the first parameter adjustment operation can be determined according to actual conditions, including but not limited to learning rate, image size, etc.
  • that is, the first training data is first determined based on the sample image, the initial network model is then determined, the sample image is input into the initial network model to determine the second training data corresponding to the first training data, and a first parameter adjustment operation is performed on the initial network model based on the first training data and the second training data to generate a network model for determining the lesion area in the medical image.
  • the network model training method provided by the embodiments of the present disclosure inputs the sample image into the initial network model to determine the second training data corresponding to the first training data, and then performs the first parameter adjustment operation on the initial network model based on the first training data and the second training data to generate a network model for determining the lesion area in the medical image, thereby achieving the purpose of training the initial network model based on the sample image. Since the first training data is pre-labeled and the second training data is determined by the initial network model, the difference between the first training data and the second training data can characterize the prediction accuracy of the initial network model.
  • in the embodiments of the present disclosure, after the first parameter adjustment operation is performed on the initial network model based on the first training data and the second training data, the error between the first training data and the second training data can be effectively reduced; therefore, the embodiments of the present disclosure can effectively improve the prediction accuracy of the finally generated network model for determining the lesion area in the medical image.
  • FIG. 5 is a schematic flowchart of training an initial network model based on sample images to generate a network model for determining a lesion area in a medical image according to another exemplary embodiment of the present disclosure.
  • the embodiment shown in FIG. 5 of the present disclosure is extended on the basis of the embodiment shown in FIG. 4. The following focuses on the differences between the embodiment shown in FIG. 5 and the embodiment shown in FIG. 4; the similarities are not repeated here.
  • the initial network model includes an image feature extraction model and a prediction model that are connected to each other.
  • the image feature extraction model is used to extract image feature information of the medical image, and the prediction model is used to predict the training data corresponding to the medical image.
  • in the embodiment of the present disclosure, the step of performing the first parameter adjustment operation on the initial network model based on the first training data and the second training data to generate a network model for determining the lesion area in the medical image includes the following steps.
  • Step 221 Adjust the network parameters of the prediction model and the network parameters of the image feature extraction model based on the second lesion type information in the second training data and the first lesion type information in the first training data, so as to generate a network model for determining the lesion area in the medical image.
  • that is, the first training data is first determined based on the sample image, the initial network model is then determined, the sample image is input into the initial network model to determine the second training data corresponding to the first training data, and the network parameters of the prediction model and of the image feature extraction model are then adjusted based on the second lesion type information in the second training data and the first lesion type information in the first training data to generate a network model for determining the lesion area in the medical image.
  • the network model training method provided by the embodiments of the present disclosure adjusts the network parameters of the prediction model and of the image feature extraction model based on the second lesion type information in the second training data and the first lesion type information in the first training data, thereby achieving the purpose of performing the first parameter adjustment operation on the initial network model based on the first training data and the second training data. Since the lesion type information can help determine the coordinate information of the lesion area, the embodiments of the present disclosure can further improve the accuracy of the determined lesion area coordinate information, thereby improving the positioning accuracy of the lesion area.
  • FIG. 6 is a schematic flowchart of adjusting the network parameters of the prediction model and of the image feature extraction model based on the second lesion type information in the second training data and the first lesion type information in the first training data, provided by an exemplary embodiment of the present disclosure.
  • the embodiment shown in FIG. 6 of the present disclosure is extended on the basis of the embodiment shown in FIG. 5. The following focuses on the differences between the embodiment shown in FIG. 6 and the embodiment shown in FIG. 5; the similarities are not repeated here.
  • the prediction model includes a coordinate information prediction sub-model and a type information prediction sub-model.
  • the coordinate information prediction sub-model is used to predict the coordinate information of the lesion area
  • the type information prediction sub-model is used to predict the type information of the lesion.
  • in the embodiment of the present disclosure, the step of adjusting the network parameters of the prediction model and the network parameters of the image feature extraction model based on the second lesion type information in the second training data and the first lesion type information in the first training data includes the following steps.
  • Step 2211 Adjust the network parameters of the type information prediction sub-model in the prediction model based on the second focus type information and the first focus type information.
  • the network parameters of the type information prediction sub-model in the prediction model can be adjusted based on the second lesion type information and the pre-labeled first lesion type information to further improve the prediction accuracy of the type information prediction sub-model.
  • Step 2212 Adjust the network parameters of the image feature extraction model based on the adjusted type information prediction sub-model.
  • adjusting the network parameters of the image feature extraction model based on the adjusted type information prediction sub-model can further improve the accuracy of the image feature information extracted by the image feature extraction model.
  • Step 2213 Adjust the network parameters of the coordinate information prediction sub-model in the prediction model based on the adjusted image feature extraction model.
  • since the coordinate information prediction sub-model in the prediction model uses the image feature information extracted by the image feature extraction model as input data, adjusting the network parameters of the coordinate information prediction sub-model in the prediction model based on the adjusted image feature extraction model can further improve the accuracy of the lesion area coordinate information determined by the coordinate information prediction sub-model.
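  • one possible reading of steps 2211 to 2213 is a staged fine-tuning schedule in which only one part of the model is updated at a time; the sketch below, reusing the hypothetical model and data from the earlier training sketch, illustrates that reading and is not the only way to implement these steps.

```python
import torch
import torch.nn.functional as F

def adjust(params, loss_fn, lr=1e-4, steps=10):
    """Run a few gradient steps on one parameter group only."""
    opt = torch.optim.SGD(list(params), lr=lr)
    for _ in range(steps):
        loss = loss_fn()
        opt.zero_grad()
        loss.backward()
        opt.step()

def type_loss():   # second vs. first lesion type information
    _, type_logits = model(sample_images)
    return F.cross_entropy(type_logits, first_types)

def coord_loss():  # second vs. first lesion area coordinate information
    pred_coords, _ = model(sample_images)
    return F.smooth_l1_loss(pred_coords, first_coords)

# Step 2211: adjust the type information prediction sub-model.
adjust(model.type_head.parameters(), type_loss)
# Step 2212: adjust the image feature extraction model based on the
# adjusted type information prediction sub-model.
adjust(model.features.parameters(), type_loss)
# Step 2213: adjust the coordinate information prediction sub-model based on
# the adjusted image feature extraction model.
adjust(model.coord_head.parameters(), coord_loss)
```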
  • that is, the first training data is first determined based on the sample image, the initial network model is then determined, the sample image is input into the initial network model to determine the second training data corresponding to the first training data, the network parameters of the type information prediction sub-model in the prediction model are then adjusted based on the second lesion type information and the first lesion type information, the network parameters of the image feature extraction model are adjusted based on the adjusted type information prediction sub-model, and the network parameters of the coordinate information prediction sub-model in the prediction model are adjusted based on the adjusted image feature extraction model, so as to generate a network model for determining the lesion area in the medical image.
  • the network model training method adjusts the network parameters of the type information prediction sub-model based on the second lesion type information and the first lesion type information, then adjusts the network parameters of the image feature extraction model based on the adjusted type information prediction sub-model, and adjusts the network parameters of the coordinate information prediction sub-model in the prediction model based on the adjusted image feature extraction model, thereby achieving the purpose of adjusting the network parameters of the prediction model and of the image feature extraction model based on the second lesion type information in the second training data and the first lesion type information in the first training data. Based on the analysis in the foregoing embodiment, the embodiment of the present disclosure can further improve the accuracy of the determined lesion area coordinate information.
  • FIG. 7 is a schematic flowchart of training an initial network model based on sample images to generate a network model for determining a lesion area in a medical image according to another exemplary embodiment of the present disclosure.
  • the embodiment shown in FIG. 7 of the present disclosure is extended on the basis of the embodiment shown in FIG. 5. The following focuses on the differences between the embodiment shown in FIG. 7 and the embodiment shown in FIG. 5; the similarities are not repeated here.
  • in the embodiment of the present disclosure, the step of performing the first parameter adjustment operation on the initial network model based on the first training data and the second training data to generate a network model for determining the lesion area in the medical image includes the following steps.
  • Step 2214 Adjust the network parameters of the prediction model and the network parameters of the image feature extraction model based on the second lesion type information in the second training data and the first lesion type information in the first training data, and adjust the network parameters of the image feature extraction model based on the second lesion area coordinate information in the second training data and the first lesion area coordinate information in the first training data, so as to generate a network model for determining the lesion area in the medical image.
  • that is, the first training data is first determined based on the sample image, the initial network model is then determined, the sample image is input into the initial network model to determine the second training data corresponding to the first training data, the network parameters of the prediction model and of the image feature extraction model are adjusted based on the second lesion type information in the second training data and the first lesion type information in the first training data, and the network parameters of the image feature extraction model are adjusted based on the second lesion area coordinate information in the second training data and the first lesion area coordinate information in the first training data, so as to generate a network model for determining the lesion area in the medical image.
  • the network model training method adjusts the network parameters of the prediction model and of the image feature extraction model based on the second lesion type information in the second training data and the first lesion type information in the first training data, and adjusts the network parameters of the image feature extraction model based on the second lesion area coordinate information in the second training data and the first lesion area coordinate information in the first training data, thereby achieving the purpose of performing the first parameter adjustment operation on the initial network model based on the first training data and the second training data.
  • the embodiment of the present disclosure adds the step of adjusting the network parameters of the image feature extraction model based on the second lesion area coordinate information and the first lesion area coordinate information; therefore, the embodiment of the present disclosure can further improve the accuracy of the determined lesion area coordinate information, thereby improving the positioning accuracy of the lesion area.
  • Fig. 8 is a schematic structural diagram of an initial network model provided by an exemplary embodiment of the present disclosure.
  • the image feature extraction model includes a ResNext-50 network model 41 and a panoramic feature pyramid network model 42
  • the prediction model includes a prediction network model 43.
  • the type information prediction sub-model is the classification prediction module 431
  • the coordinate information prediction sub-model is the coordinate prediction module 432.
  • G represents the number of grouped convolutions.
  • MP represents the max pooling layer. "×3", "×4", "×6", and "×3" in the ResNext-50 network model 41 indicate that the corresponding module is stacked 3, 4, 6, and 3 times, respectively.
  • the sample image is input into the ResNext-50 network model 41 and the panoramic feature pyramid network model 42 for image feature extraction, so as to output three feature layers P3, P4, and P5, and these three feature layers are then input into both the classification prediction module 431 and the coordinate prediction module 432.
  • the sizes of the three feature layers P3, P4, and P5 are batch×256×64×64, batch×256×32×32, and batch×256×16×16, respectively.
  • batch represents the batch size, that is, the number of samples used to calculate the gradient.
  • the feature layers P4 and P5 are up-sampled by factors of 2 and 4, respectively, and then merged with the feature layer P3 to generate a feature map of size batch×768×64×64.
  • then a batch×2n matrix is obtained, where n represents the number of categories that need to be predicted, and finally a softmax classifier is used to obtain the predicted probability for each category.
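  • the fusion and classification just described can be sketched as follows; the feature layer sizes come from the text above, while the interpolation mode and the pooling-plus-linear classification tail are assumptions, since the text does not specify them.

```python
import torch
import torch.nn.functional as F

batch, n = 4, 6                        # n = number of categories to predict
P3 = torch.randn(batch, 256, 64, 64)   # batch x 256 x 64 x 64
P4 = torch.randn(batch, 256, 32, 32)   # batch x 256 x 32 x 32
P5 = torch.randn(batch, 256, 16, 16)   # batch x 256 x 16 x 16

# Up-sample P4 by 2x and P5 by 4x, then merge with P3 along the channel axis.
fused = torch.cat([P3,
                   F.interpolate(P4, scale_factor=2.0),
                   F.interpolate(P5, scale_factor=4.0)], dim=1)
assert fused.shape == (batch, 768, 64, 64)   # batch x 768 x 64 x 64

# Reduce to a batch x 2n matrix and apply softmax; interpreting the 2n outputs
# as a (present, absent) pair per category is an assumption.
head = torch.nn.Linear(768, 2 * n)
logits = head(fused.mean(dim=(2, 3)))        # global average pool, then linear
probs = torch.softmax(logits.view(batch, n, 2), dim=-1)
```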
  • during the training process, the classification prediction module 431 uses the feature layers P3, P4, and P5 to inversely affect the network parameters of the panoramic feature pyramid network model 42, which in turn indirectly affects the network parameters of the ResNext-50 network model 41. Since the input data of the coordinate prediction module 432 is determined based on the ResNext-50 network model 41 and the panoramic feature pyramid network model 42, the classification prediction module 431 also indirectly affects the network parameters of the coordinate prediction module 432, so that the lesion type information learned by the classification prediction module 431 improves the prediction accuracy of the coordinate prediction module 432.
  • the embodiments of the present disclosure can not only reduce over-fitting, but also further improve the accuracy of the determined coordinate information of the lesion area.
  • the loss function can be used to evaluate the difference between the predicted result output by the network model and the actual result.
  • the loss function is a non-negative real-valued function, and the loss value of the loss function can characterize the prediction performance of the network model, that is, the smaller the loss value of the loss function, the better the prediction performance of the network model.
  • the purpose of the continuous iterative training process mentioned in the above embodiment is to make the loss value of the loss function as small as possible, so as to optimize the prediction performance of the network model. Therefore, the loss function is of great significance for improving the prediction performance of the network model.
  • the loss function in the type information prediction sub-model is determined based on calculation formula (1), where θ represents the network parameters of the type information prediction sub-model, m represents the number of types, h represents the predicted probability, and y represents the label of each image.
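  • the body of calculation formula (1) appears only as an image in the original publication and is not reproduced in this text; given the symbols described above, the standard cross-entropy form consistent with that description would be:

```latex
% Hedged reconstruction of calculation formula (1): the usual multi-class
% cross-entropy over the m types, with one-hot labels y_i and predicted
% probabilities h_i produced by the sub-model with parameters \theta.
J(\theta) = -\sum_{i=1}^{m} y_i \, \log h_i(\theta)
```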
  • the loss function in the type information prediction sub-model described in calculation formula (1) is a cross-entropy loss function. Since the cross-entropy loss function includes a logarithmic term, compared with the mean square error loss function it can still maintain a high gradient when the training prediction result is close to the real result; that is, the convergence speed of the network model will not be adversely affected.
  • the loss function in the type information prediction sub-model is not limited to the loss function described in calculation formula (1); any loss function that includes a logarithmic term generated based on the predicted probability can implement the above embodiment and achieve its beneficial effects.
  • in the actual training process, the number of samples of each type is not exactly the same, and the numbers of samples of different types may differ considerably. When the numbers of samples of different types differ greatly, with a loss function such as that in calculation formula (1), the types with many samples account for a larger proportion of the loss while the types with few samples account for a smaller proportion, which leads to the training effect for types with few samples being worse than that for types with many samples.
  • in view of this, another embodiment of the present disclosure is extended on the basis of the cross-entropy loss function represented by calculation formula (1) in the above embodiment.
  • in this embodiment, a weight parameter w_i is set for the loss term corresponding to each type.
  • the weight parameter w_i is determined according to the proportion of the corresponding type in the entire sample data set, and its value ranges between 0 and 1.
  • the embodiment of the present disclosure further balances the training effect across types by setting a corresponding weight parameter for the loss term of each type, thereby further improving the prediction accuracy of the network model.
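  • the text gives the range of w_i and ties it to each type's proportion in the sample set, but not the exact mapping; the sketch below assumes one plausible mapping (weights closer to 1 for rarer types) and uses the per-class weight argument of the cross-entropy loss.

```python
import torch
import torch.nn.functional as F

# Hypothetical sample counts for six lesion types in the training set.
type_counts = torch.tensor([500.0, 120.0, 900.0, 60.0, 200.0, 20.0])
proportion = type_counts / type_counts.sum()
w = 1.0 - proportion          # assumed mapping: rarer types get larger weights

logits = torch.randn(8, 6)               # batch of 8 images, 6 lesion types
labels = torch.randint(0, 6, (8,))
loss = F.cross_entropy(logits, labels, weight=w)  # weighted cross-entropy
```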
  • the loss function in the coordinate information prediction sub-model is determined based on calculation formula (2).
  • in calculation formula (2), N represents the number of matched default boxes, x represents whether a matched box belongs to type P, l represents the predicted box, g represents the ground-truth box, and c represents the confidence that the selected target belongs to type P.
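  • the body of calculation formula (2) is likewise an image in the original publication; the symbols described above match the standard SSD MultiBox objective, so the formula presumably has the usual form below, where α is a balancing weight between the two terms (an assumption, as the text does not describe it):

```latex
% Hedged reconstruction of calculation formula (2): the standard SSD MultiBox
% loss over the N matched default boxes, combining a confidence term over the
% class confidences c and a localization term between predicted boxes l and
% ground-truth boxes g, with match indicators x.
L(x, c, l, g) = \frac{1}{N}\left( L_{\mathrm{conf}}(x, c)
                + \alpha \, L_{\mathrm{loc}}(x, l, g) \right)
```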
  • type P may be any type, which is not limited in the embodiment of the present disclosure.
  • the loss function in the coordinate information prediction sub-model mentioned in the embodiment of the present disclosure can be applied to any prediction unit in the coordinate information prediction sub-model.
  • for example, the loss function described in calculation formula (2) is applied to both the category prediction unit and the coordinate prediction unit of the coordinate prediction module 432; that is, the loss functions in the category prediction unit and in the coordinate prediction unit are both given by calculation formula (2).
  • FIG. 9 is a schematic flowchart of performing the first parameter adjustment operation on the initial network model based on the first training data and the second training data to generate a network model for determining the lesion area in a medical image, provided by an exemplary embodiment of the present disclosure.
  • the embodiment shown in FIG. 9 of the present disclosure is extended on the basis of the embodiment shown in FIG. 4. The following focuses on the differences between the embodiment shown in FIG. 9 and the embodiment shown in FIG. 4; the similarities are not repeated here.
  • in the embodiment of the present disclosure, the step of performing the first parameter adjustment operation on the initial network model based on the first training data and the second training data to generate a network model for determining the lesion area in the medical image includes the following steps.
  • Step 222 Perform a first parameter adjustment operation on the initial network model based on the first training data and the second training data.
  • Step 223 Determine third training data corresponding to the first training data based on the sample image and the initial network model after the first parameter adjustment operation, where the third training data includes the third lesion area coordinate information and the third lesion type information.
  • Step 224 Perform a second parameter adjustment operation on the initial network model after the first parameter adjustment operation based on the first training data and the third training data, so as to generate a network model for determining the lesion area in the medical image.
  • that is, the first training data is first determined based on the sample image, the initial network model is then determined, the sample image is input into the initial network model to determine the second training data corresponding to the first training data, the first parameter adjustment operation is performed on the initial network model based on the first training data and the second training data, the third training data corresponding to the first training data is then determined based on the sample image and the initial network model after the first parameter adjustment operation, and the second parameter adjustment operation is performed on the initial network model after the first parameter adjustment operation based on the first training data and the third training data, so as to generate a network model for determining the lesion area in the medical image.
  • it should be noted that the number of parameter adjustment operations on the initial network model is not limited to the two mentioned in the embodiment of the present disclosure; it can also be three, four, or more, as long as the prediction accuracy of the generated network model for determining the lesion area in the medical image meets the preset requirements.
  • the network model training method provided by the embodiments of the present disclosure achieves the purpose of performing multiple parameter adjustment operations on the initial network model; therefore, compared with the embodiment shown in FIG. 4, the embodiment of the present disclosure can further improve the prediction accuracy of the finally generated network model for determining the lesion area in the medical image.
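  • a hedged sketch of the repeated adjustment rounds described above, with `train_one_round` and `evaluate` as assumed helpers (one parameter adjustment operation and a validation measurement, respectively) and an assumed accuracy threshold standing in for the preset requirements:

```python
# Hypothetical loop over repeated parameter adjustment operations; the text
# only requires that the rounds continue until the prediction accuracy of the
# generated network model meets the preset requirements.
TARGET_ACCURACY = 0.95   # assumed stand-in for the "preset requirements"

round_idx = 0
while True:
    round_idx += 1
    # One parameter adjustment operation: re-predict the training data for the
    # sample images and adjust the network parameters against the first
    # (labeled) training data, as in the earlier training sketch.
    train_one_round(model, sample_images, first_coords, first_types)
    accuracy = evaluate(model, validation_images, validation_labels)
    if accuracy >= TARGET_ACCURACY:
        break
print(f"met preset requirements after {round_idx} adjustment rounds")
```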
  • FIG. 10 is a schematic diagram of a process of determining first training data based on sample images according to an exemplary embodiment of the present disclosure.
  • the embodiment shown in FIG. 10 of the present disclosure is extended on the basis of the embodiment shown in FIG. 3. The following focuses on the differences between the embodiment shown in FIG. 10 and the embodiment shown in FIG. 3; the similarities are not repeated here.
  • the step of determining the first training data based on the sample image includes the following steps.
  • Step 11 Determine a sample image including the lesion area and the marking rules.
  • the marking rules are pre-determined by humans (such as doctors).
  • the marking rules are to mark the lesion area coordinate information and the lesion type information corresponding to the lesion area in the sample image.
  • Step 12 Perform a marking operation on the sample image based on the marking rules to generate the first training data.
  • the network model training method provided by the embodiment of the present disclosure determines the sample image including the lesion area and the marking rules, and performs the marking operation on the sample image based on the marking rules to generate the first training data, thereby achieving the purpose of determining the first training data based on the sample image. Since the marking rules can be determined in advance based on the actual situation of the sample image, the embodiments of the present disclosure can effectively improve the flexibility of marking, thereby improving the adaptability and broad applicability of the trained network model for determining the lesion area in the medical image.
  • FIG. 11 is a schematic flowchart of a method for determining a lesion area provided by an exemplary embodiment of the present disclosure. As shown in FIG. 11, the method for determining a lesion area provided by an embodiment of the present disclosure includes the following steps.
  • Step 50 Determine a medical image for which the lesion area needs to be determined.
  • Step 60 Input the medical image to a network model for determining the lesion area in the medical image, so as to determine the coordinate information of the lesion area of the medical image.
  • the network model used to determine the lesion area in the medical image mentioned in step 60 may be obtained based on the network model training method mentioned in any of the foregoing embodiments.
  • that is, the medical image for which the lesion area needs to be determined is first determined, and then the medical image is input into the network model for determining the lesion area in the medical image, so as to determine the lesion area coordinate information of the medical image.
  • the method for determining the lesion area provided by the embodiment of the present disclosure inputs the medical image for which the lesion area needs to be determined into the network model for determining the lesion area in the medical image, so as to determine the lesion area coordinate information of the medical image, thereby achieving the purpose of determining the coordinate information of the lesion area in the medical image. Since the method is implemented based on the network model for determining the lesion area in the medical image, compared with existing solutions, the embodiment of the present disclosure does not need to perform complex processing operations such as image enhancement and filter transformation on the medical image, thereby avoiding failures in predicting the lesion area coordinate information caused by factors such as image quality. That is, the method for determining the lesion area provided by the embodiment of the present disclosure has high stability and good robustness.
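  • assuming the trained model from the earlier sketches, the two steps above reduce to a forward pass; the preprocessing and output interpretation below are illustrative assumptions.

```python
import torch

model.eval()   # trained network model from the earlier sketches
with torch.no_grad():
    # Step 50: the medical image for which the lesion area is to be determined.
    medical_image = torch.randn(1, 1, 64, 64)
    # Step 60: input the image into the network model.
    pred_coords, type_logits = model(medical_image)
    lesion_type = type_logits.argmax(dim=1)

print("lesion area coordinates:", pred_coords.tolist())  # assumed box format
print("lesion type index:", lesion_type.item())
```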
  • FIG. 12 is a schematic flowchart of a method for determining a lesion area according to another exemplary embodiment of the present disclosure.
  • The embodiment shown in FIG. 12 of the present disclosure extends the embodiment shown in FIG. 11. The following focuses on the differences between the two embodiments; the similarities are not repeated here.
  • As shown in FIG. 12, in the method for determining a lesion area provided by this embodiment, after the step of inputting the medical image into the network model to determine the lesion area coordinate information of the medical image, the method further includes the following steps.
  • Step 70: Perform a region division operation on the medical image to generate multiple divided regions.
  • Step 80: Determine the positional relationship between the lesion area and the multiple divided regions based on the lesion area coordinate information.
  • In actual application, the region division operation is performed on the medical image to generate the multiple divided regions, and the positional relationship between the lesion area and those regions is then determined based on the lesion area coordinate information.
  • The method for determining a lesion area provided by this embodiment determines the positional relationship between the lesion area and the multiple divided regions by performing a region division operation on the medical image and then locating the lesion area among the resulting regions using its coordinate information. Because this positional relationship supports a more precise lesion positioning operation, this embodiment can further assist subsequent disease diagnosis.
  • FIG. 13 is a schematic diagram of region division of a medical image provided by an exemplary embodiment of the present disclosure.
  • As shown in FIG. 13, the medical image provided by this embodiment includes a lung field area.
  • The medical image includes keypoints 1 to 16; multiple region dividing lines are generated based on the correspondence between these keypoints, and the dividing lines split the lung field area into multiple divided regions.
  • Specifically, each lung field area is divided into nine zones: the upper field inner zone, upper field middle zone, upper field outer zone, middle field inner zone, middle field middle zone, middle field outer zone, lower field inner zone, lower field middle zone, and lower field outer zone.
  • In the lesion determination process, the position of the lesion area among these divided regions may be determined from the lesion area coordinate information, and a structured report may then be generated for the doctor's reference, for example: a nodule shadow is visible in the upper field outer zone of the right lung. A sketch of such report generation follows.
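  • Exemplarily, a minimal sketch of such structured report generation might look as follows; the `findings` mapping and the report phrasing are assumptions of this illustration.

```python
def structured_report(findings):
    """Render region-level findings as report sentences for the doctor's reference.
    `findings` maps a divided-region name to a list of lesion descriptions;
    the phrasing mirrors the example in the text and is illustrative only."""
    lines = []
    for region, lesions in findings.items():
        for lesion in lesions:
            lines.append(f"{lesion} visible in the {region}.")
    return "\n".join(lines)

print(structured_report({"upper field outer zone of the right lung": ["nodule shadow"]}))
```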
  • FIG. 14 is a schematic diagram of a process of performing a region division operation on a medical image to generate multiple divided regions according to an exemplary embodiment of the present disclosure.
  • The embodiment shown in FIG. 14 of the present disclosure extends the embodiment shown in FIG. 12. The following focuses on the differences between the two embodiments; the similarities are not repeated here.
  • As shown in FIG. 14, the step of performing a region division operation on the medical image to generate multiple divided regions includes the following steps.
  • Step 71: Input the medical image into the keypoint network model to determine a coordinate information set of multiple keypoints corresponding to the medical image, where the coordinate information set is used to perform the region division operation on the medical image.
  • Optionally, the keypoint network model is a convolutional neural network (CNN) model.
  • Step 72: Perform the region division operation on the medical image based on the coordinate information set to generate the multiple divided regions.
  • In actual application, the medical image for which a lesion area needs to be determined is determined first, and the medical image is input into the network model to determine its lesion area coordinate information. The medical image is then input into the keypoint network model to determine the coordinate information set of its keypoints, the region division operation is performed based on that set to generate the multiple divided regions, and finally the positional relationship between the lesion area and the divided regions is determined from the lesion area coordinate information.
  • By inputting the medical image into a keypoint network model to obtain the keypoint coordinate information set and then dividing the image into regions based on that set, this embodiment achieves the region division operation. Because the region division is implemented with a keypoint network model, compared with existing solutions it does not require complex image enhancement, filter transformations, or similar processing of the medical image to be divided, thereby avoiding region division failures caused by factors such as image quality.
  • Moreover, because this embodiment converts the region division problem into a localization problem over keypoint coordinate information, it greatly simplifies the complexity of the region division operation. A minimal sketch of keypoint-based division follows.
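  • Exemplarily, the sketch below turns a predicted keypoint coordinate set into region dividing lines; the keypoint pairings and coordinates are assumptions of this illustration, standing in for the correspondences of FIG. 13.

```python
def divide_regions(keypoints: dict, pairs):
    """Turn the keypoint coordinate set into region dividing lines.
    `keypoints` maps a keypoint index to (x, y); `pairs` lists which
    keypoints are connected (the pairings here are assumptions)."""
    return [(keypoints[a], keypoints[b]) for a, b in pairs]

# Hypothetical coordinates, as a keypoint network model might predict them.
kps = {1: (100, 50), 2: (200, 60), 3: (95, 150), 4: (205, 160)}
division_lines = divide_regions(kps, [(1, 2), (3, 4)])
print(division_lines)  # two dividing lines across a lung field
```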
  • FIG. 15 is a schematic flowchart of determining the positional relationship between the lesion area and multiple divided areas based on the coordinate information of the lesion area according to an exemplary embodiment of the present disclosure.
  • The embodiment shown in FIG. 15 of the present disclosure extends the embodiment shown in FIG. 12. The following focuses on the differences between the two embodiments; the similarities are not repeated here.
  • As shown in FIG. 15, the step of determining the positional relationship between the lesion area and the multiple divided regions based on the lesion area coordinate information includes the following steps.
  • Step 81: Determine the position information of the center of gravity of the lesion area based on the lesion area coordinate information.
  • Step 82: Determine the positional relationship between the lesion area and the multiple divided regions based on the positional relationship between the center of gravity and the multiple divided regions.
  • Illustratively, the position information of the center of gravity of the lesion area is taken as the position information of the lesion area.
  • In actual application, the medical image for which a lesion area needs to be determined is determined first, the medical image is input into the network model to determine its lesion area coordinate information, and the region division operation is performed on the medical image to generate the multiple divided regions. Finally, the position of the center of gravity of the lesion area is determined from the lesion area coordinate information, and the positional relationship between the lesion area and the divided regions is determined from the positional relationship between that center of gravity and the divided regions.
  • The method for determining a lesion area provided by this embodiment determines the positional relationship between the lesion area of the medical image and the multiple divided regions by determining the position of the lesion area's center of gravity and then locating that center of gravity among the divided regions. Because lesion areas vary widely in shape and volume, determining the relative position of a lesion area from its center of gravity effectively ensures positioning accuracy. A sketch of this computation follows.
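  • Exemplarily, the following Python sketch shows one way to compute a lesion area's center of gravity and locate it among divided regions; modeling the divided regions as axis-aligned boxes is an assumption of this sketch, as the disclosure places no such restriction.

```python
def polygon_centroid(points):
    """Center of gravity of a (possibly irregular) lesion boundary polygon,
    computed with the standard shoelace formula."""
    area = cx = cy = 0.0
    n = len(points)
    for i in range(n):
        x0, y0 = points[i]
        x1, y1 = points[(i + 1) % n]
        cross = x0 * y1 - x1 * y0
        area += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    area *= 0.5
    return (cx / (6.0 * area), cy / (6.0 * area))

def locate(centroid, regions):
    """Step 82: find which divided region contains the lesion's center of gravity.
    Regions are modeled as axis-aligned boxes, an assumption of this sketch."""
    x, y = centroid
    for name, (x0, y0, x1, y1) in regions.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None
```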
  • FIG. 16 is a schematic diagram of positioning the lesion area of the medical image including the lung field area based on the area division shown in FIG. 13.
  • As shown in FIG. 16, after the keypoints of the lung field area are connected according to preset rules, the left lung field area and the right lung field area in the medical image can be framed.
  • Specifically, a first connecting line is generated from first-direction keypoints 1 and 2, a second connecting line from first-direction keypoints 3 and 4, a third connecting line from second-direction keypoints 9 and 10, and a fourth connecting line from second-direction keypoints 11 and 12. Based on the positions of these keypoints, the first through fourth connecting lines jointly form the contour that frames the left lung field area.
  • Similarly, the connecting lines formed from first-direction keypoints 5, 6, 7, and 8 and second-direction keypoints 13, 14, 15, and 16 form the contour that frames the right lung field area. The specific connection scheme can refer to the one described above for the left lung field contour and is not repeated here.
  • Continuing with FIG. 16, in this embodiment the lesion areas in the medical image include lesion area M and lesion area N. Lesion area M has a regular boundary that appears as a rectangular box in the medical image, and its center of gravity is m; lesion area N has an irregular boundary that appears as an irregular polygon in the medical image, and its center of gravity is n.
  • In actual application, when a lesion area needs to be positioned, its position information can be determined from the position of its center of gravity. For example, since the center of gravity m of lesion area M lies in the middle field middle zone of the left lung field area, lesion area M is determined to lie in that zone; when assisting a doctor's diagnosis, this can be reported as "lesion area M visible in the middle field middle zone of the left lung field". Likewise, since the center of gravity n of lesion area N lies in the upper field middle zone of the right lung field area, lesion area N is determined to lie in that zone, reported as "lesion area N visible in the upper field middle zone of the right lung field".
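  • Continuing the centroid sketch above with made-up coordinates for lesion areas M and N (the numbers and region boxes are illustrative only and do not come from the disclosure):

```python
# Illustrative coordinates only; reuses polygon_centroid and locate from above.
regions = {
    "middle field middle zone of the left lung field": (80.0, 200.0, 160.0, 300.0),
    "upper field middle zone of the right lung field": (260.0, 60.0, 340.0, 160.0),
}
m = polygon_centroid([(100, 220), (150, 220), (150, 260), (100, 260)])  # rectangle M
n = polygon_centroid([(280, 80), (320, 90), (330, 130), (290, 140), (270, 110)])  # polygon N
print(f"lesion area M visible in the {locate(m, regions)}")
print(f"lesion area N visible in the {locate(n, regions)}")
```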
  • Fig. 17 is a schematic structural diagram of a network model training device provided by an exemplary embodiment of the present disclosure. As shown in FIG. 17, the network model training device provided by the embodiment of the present disclosure includes:
  • the first training data determining module 100 is configured to determine first training data based on a sample image, where the sample image includes a lesion area, and the first training data includes labeled first lesion area coordinate information and first lesion type information;
  • the training module 200 is used to determine an initial network model, and to train the initial network model based on the sample image to generate a network model for determining the lesion area in the medical image.
  • Fig. 18 is a schematic structural diagram of a training module provided by an exemplary embodiment of the present disclosure.
  • The embodiment shown in FIG. 18 of the present disclosure extends the embodiment shown in FIG. 17. The following focuses on the differences between the two embodiments; the similarities are not repeated here.
  • the training module 200 includes:
  • the second training data determining unit 210 is configured to input the sample image into the initial network model to determine second training data corresponding to the first training data, where the second training data includes second lesion area coordinate information and second lesion type information;
  • the training unit 220 is configured to perform a first parameter adjustment operation on the initial network model based on the first training data and the second training data, so as to generate a network model for determining the lesion area in a medical image.
  • FIG. 19 is a schematic structural diagram of a training module provided by another exemplary embodiment of the present disclosure.
  • The embodiment shown in FIG. 19 of the present disclosure extends the embodiment shown in FIG. 18. The following focuses on the differences between the two embodiments; the similarities are not repeated here.
  • the training unit 220 includes:
  • the first training subunit 2210 is configured to adjust the network parameters of the prediction model and the network parameters of the image feature extraction model based on the second lesion type information in the second training data and the first lesion type information in the first training data, so as to generate a network model for determining the lesion area in a medical image.
  • Fig. 20 is a schematic structural diagram of a first training subunit provided by an exemplary embodiment of the present disclosure.
  • The embodiment shown in Fig. 20 of the present disclosure extends the embodiment shown in Fig. 19. The following focuses on the differences between the two embodiments; the similarities are not repeated here.
  • the first training subunit 2210 includes:
  • the first network parameter adjustment subunit 22110 is configured to adjust the network parameters of the type information prediction sub-model in the prediction model based on the second lesion type information and the first lesion type information;
  • the second network parameter adjustment subunit 22120 is configured to adjust the network parameters of the image feature extraction model based on the adjusted type information prediction submodel;
  • the third network parameter adjustment subunit 22130 is configured to adjust the network parameters of the coordinate information prediction submodel in the prediction model based on the adjusted image feature extraction model.
  • FIG. 21 is a schematic structural diagram of a training module provided by another exemplary embodiment of the present disclosure.
  • The embodiment shown in FIG. 21 of the present disclosure extends the embodiment shown in FIG. 19. The following focuses on the differences between the two embodiments; the similarities are not repeated here.
  • the first training subunit 2210 includes:
  • the second training subunit 22140 is configured to adjust the network parameters of the prediction model and the network parameters of the image feature extraction model based on the second lesion type information in the second training data and the first lesion type information in the first training data, and to adjust the network parameters of the image feature extraction model based on the second lesion area coordinate information in the second training data and the first lesion area coordinate information in the first training data, so as to generate a network model for determining the lesion area in a medical image.
  • Fig. 22 is a schematic structural diagram of a training unit provided by an exemplary embodiment of the present disclosure.
  • The embodiment shown in FIG. 22 of the present disclosure extends the embodiment shown in FIG. 18. The following focuses on the differences between the two embodiments; the similarities are not repeated here.
  • the training unit 220 includes:
  • the first parameter adjustment subunit 2220 is configured to perform the first parameter adjustment operation on the initial network model based on the first training data and the second training data;
  • the third training data determining subunit 2230 is configured to determine third training data corresponding to the first training data based on the sample image and the initial network model after the first parameter adjustment operation, where the third training data includes third lesion area coordinate information and third lesion type information;
  • the second parameter adjustment subunit 2240 is configured to perform a second parameter adjustment operation, based on the first training data and the third training data, on the initial network model that has undergone the first parameter adjustment operation, so as to generate a network model for determining the lesion area in a medical image.
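  • Exemplarily, the two-stage adjustment performed by subunits 2220 to 2240 can be read as an iterative loop, sketched below under assumed `predict`, `loss`, and `adjust` interfaces that are not part of the present disclosure.

```python
def iterative_adjustment(model, sample_images, first_training_data, rounds=2):
    """Sketch of the multi-round scheme of subunits 2220-2240: in each round,
    the current model predicts training data for the samples (the second,
    then third, training data), the gap to the labeled first training data
    drives a parameter adjustment, and the adjusted model feeds the next
    round. `predict`, `loss`, and `adjust` are assumed interfaces."""
    for _ in range(rounds):
        predicted = model.predict(sample_images)          # 2nd / 3rd training data
        gap = model.loss(first_training_data, predicted)  # compare with labels
        model = model.adjust(gap)                         # parameter adjustment op
    return model  # network model for determining lesion areas in medical images
```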
  • FIG. 23 is a schematic structural diagram of a first training data determining module provided by an exemplary embodiment of the present disclosure.
  • The embodiment shown in FIG. 23 of the present disclosure extends the embodiment shown in FIG. 17. The following focuses on the differences between the two embodiments; the similarities are not repeated here.
  • the first training data determining module 100 includes:
  • the determining unit 110 is configured to determine a sample image that includes a lesion area and a labeling rule;
  • the first training data generating unit 120 is configured to perform a labeling operation on the sample image based on the labeling rule to generate the first training data.
  • FIG. 24 is a schematic structural diagram of a device for determining a lesion area provided by an exemplary embodiment of the present disclosure. As shown in FIG. 24, the device for determining a lesion area provided by an embodiment of the present disclosure includes:
  • the image determining module 500 is configured to determine the medical image for which a lesion area needs to be determined;
  • the lesion area determination module 600 is used to input the medical image into a network model for determining the lesion area in the medical image, so as to determine the lesion area coordinate information of the medical image.
  • FIG. 25 is a schematic structural diagram of an apparatus for determining a lesion area provided by another exemplary embodiment of the present disclosure.
  • The embodiment shown in FIG. 25 of the present disclosure extends the embodiment shown in FIG. 24. The following focuses on the differences between the two embodiments; the similarities are not repeated here.
  • the device for determining a lesion area further includes:
  • the divided region generating module 700 is configured to perform a region dividing operation on the medical image to generate multiple divided regions;
  • the position relationship determination module 800 is configured to determine the position relationship between the lesion area and multiple divided areas based on the coordinate information of the lesion area.
  • FIG. 26 is a schematic structural diagram of a divided area generating module provided by an exemplary embodiment of the present disclosure.
  • The embodiment shown in FIG. 26 of the present disclosure extends the embodiment shown in FIG. 25. The following focuses on the differences between the two embodiments; the similarities are not repeated here.
  • the divided area generating module 700 includes:
  • the coordinate information set determining unit 710 is configured to input the medical image into the key point network model to determine the coordinate information set of multiple key points corresponding to the medical image, where the coordinate information set is used to perform region division operations on the medical image;
  • the region dividing unit 720 is configured to perform a region dividing operation on the medical image based on the coordinate information set to generate multiple divided regions.
  • Fig. 27 is a schematic structural diagram of a position relationship determining module provided by an exemplary embodiment of the present disclosure.
  • The embodiment shown in FIG. 27 of the present disclosure extends the embodiment shown in FIG. 25. The following focuses on the differences between the two embodiments; the similarities are not repeated here.
  • the position relationship determining module 800 includes:
  • the center of gravity determination unit 810 is configured to determine the position information of the center of gravity of the lesion area based on the coordinate information of the lesion area;
  • the position relationship determining unit 820 is configured to determine the position relationship between the lesion area and the multiple divided areas based on the position relationship between the center of gravity and the multiple divided areas.
  • It should be understood that the operations and functions of the image determining module 500, the lesion area determining module 600, the divided region generating module 700, and the positional relationship determining module 800 in the lesion area determining devices of FIGS. 24 to 27, as well as of the coordinate information set determining unit 710 and the region dividing unit 720 included in the divided region generating module 700 and the center of gravity determining unit 810 and the positional relationship determining unit 820 included in the positional relationship determining module 800, can refer to the methods for determining a lesion area provided in FIGS. 11 to 15; to avoid repetition, they are not described again here.
  • FIG. 28 is a schematic structural diagram of an electronic device provided by an exemplary embodiment of the present disclosure.
  • the electronic device 90 includes one or more processors 901 and a memory 902.
  • the processor 901 may be a central processing unit (CPU) or another form of processing unit with data processing capability and/or instruction execution capability, and may control other components in the electronic device 90 to perform desired functions.
  • the memory 902 may include one or more computer program products, and the computer program products may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory.
  • the volatile memory may include random access memory (RAM) and/or cache memory (cache), for example.
  • the non-volatile memory may include, for example, read-only memory (ROM), hard disk, flash memory, and the like.
  • One or more computer program instructions may be stored on the computer-readable storage medium, and the processor 901 may run the program instructions to implement the network model training methods and the methods for determining a lesion area of the various embodiments of the present application described above, and/or other desired functions.
  • Various contents such as medical images can also be stored in the computer-readable storage medium.
  • the electronic device 90 may further include: an input device 903 and an output device 904, and these components are interconnected by a bus system and/or other forms of connection mechanisms (not shown).
  • the input device 903 may include, for example, a keyboard, a mouse, and so on.
  • the output device 904 can output various information to the outside, including information on the determined lesion area and so on.
  • the output device 904 may include, for example, a display, a speaker, a printer, a communication network and a remote output device connected to it, and so on.
  • Of course, for simplicity, FIG. 28 shows only some of the components of the electronic device 90 that are relevant to the present application, and components such as buses and input/output interfaces are omitted. In addition, depending on the specific application, the electronic device 90 may include any other appropriate components.
  • Embodiments of the present application may also be a computer program product, which includes computer program instructions that, when run by a processor, cause the processor to perform the steps of the network model training methods and the methods for determining a lesion area of the various embodiments of the present application described above in this specification.
  • The computer program product may use any combination of one or more programming languages to write the program code for performing the operations of the embodiments of the present application. The programming languages include object-oriented programming languages, such as Java and C++, as well as conventional procedural programming languages, such as the "C" language or similar programming languages. The program code can execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on a remote computing device or server.
  • Embodiments of the present application may also be a computer-readable storage medium on which computer program instructions are stored. When run by a processor, the computer program instructions cause the processor to perform the steps of the network model training methods and the methods for determining a lesion area of the various embodiments of the present application described above in this specification.
  • the computer-readable storage medium may adopt any combination of one or more readable media.
  • the readable medium may be a readable signal medium or a readable storage medium.
  • The readable storage medium may include, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection with one or more wires, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • It should also be noted that, in the apparatuses, devices, and methods of the present application, each component or step can be decomposed and/or recombined. Such decompositions and/or recombinations shall be regarded as equivalent solutions of the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Public Health (AREA)
  • Evolutionary Computation (AREA)
  • Pathology (AREA)
  • Databases & Information Systems (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

A network model training method includes: determining first training data based on a sample image, where the sample image includes a lesion area and the first training data includes labeled first lesion area coordinate information and first lesion type information (10); and determining an initial network model and training the initial network model based on the sample image to generate a network model for determining a lesion area in a medical image (20). The method can improve the efficiency and accuracy of lesion area determination.

Description

网络模型训练方法及装置、病灶区域确定方法及装置 技术领域
本公开涉及图像处理技术领域,具体涉及网络模型训练方法及装置、病灶区域确定方法及装置、计算机可读存储介质及电子设备。
发明背景
随着医学影像技术和图像处理技术的快速发展,基于医学影像进行的病灶检测、病灶定位以及病灶分类等操作日益成为预防及治疗疾病的重要手段。在疾病的诊断过程中,快速且精准地确定医学影像中的病灶区域的具体位置是进行疾病诊断操作的基础前提,其重要性不言而喻。然而在现有技术中,仍需依赖人工(比如医生)进行病灶区域的确定操作,效率低下,而且可能存在精度不高的问题。
因此,如何辅助医生进行病灶区域的确定以提高确认病灶的效率和精度是亟待解决的问题。
发明内容
为了解决上述技术问题,提出了本申请。本申请的实施例提供了一种网络模型训练方法及装置、病灶区域确定方法及装置、计算机可读存储介质及电子设备。
在一方面,本公开实施例提供了一种网络模型训练方法,该网络模型训练方法包括:基于样本图像确定第一训练数据,其中,样本图像包括病灶区域,第一训练数据包括标记的第一病灶区域坐标信息和第一病灶类型信息;确定初始网络模型,并基于样本图像训练所述初始网络模型,以生成用于确定医学图像中的病灶区域的网络模型。
在另一方面,本公开实施例提供了一种病灶区域确定方法,该病灶区域确定方法包括:确定需要确定病灶区域的医学图像;将医学图像输入至用于确定医学图像中的病灶区域的网络模型,以确定医学图像的病灶区域坐标信息,其中,用于确定医学图像中的病灶区域的网络模型可基于上述实施例提及的网络模型训练方法获得。
在另一方面,本公开实施例提供了一种网络模型训练装置,该网络模型训练装置包括:第一训练数据确定模块,用于基于样本图像确定第一训练数据,其中,样本图像包括病灶区域,第一训练数据包括标记的第一病灶区域坐标信息和第一病灶类型信息;训练模块,用于确定初始网络模型,并基于样本图像训练所述初始网络模型,以生成用于确定医学图像中的病灶区域的网络模型。
在另一方面,本公开实施例提供了一种病灶区域确定装置,该病灶区域确定装置包括:图像确定模块,用于确定需要确定病灶区域的医学图像;病灶区域确定模块,用于将医学图像输入至用于确定医学图像中的病灶区域的网络模型,以确定医学图像的病灶区域坐标信息,其中,用于确定医学图像中的病灶区域的网络模型可基于上述实施例提及的网络模型训练方法获得。
在另一方面,本公开实施例提供了一种计算机可读存储介质,该存储介质存储有计算机程序,该计算机程序用于执行上述实施例所提及的网络模型训练方法,或执行上述实施例所提及的病灶区域确定方法。
在另一方面,本公开实施例提供了一种电子设备,该电子设备包括:处理器和用于存储处理器可执行指令的存储器,其中,处理器用于执行上述实施例所提及的网络模型训练方法,或执行上述实施例所提及的病灶区域确定方法。
本公开实施例提供的网络模型训练方法,通过基于样本图像确定第一训练数据,然后确定初始网络模型,并基于样本图像训练初始网络模型,以生成用于确定医学图像中的病灶区域的网络模型的方式,实现了利用样本图像训练初始网络模型以生成用于确定医学图像中的病灶区域的网络模型的目的。由于样本图像为包括病灶区域的医学图像,基于样本图像确定的第一训练数据包括标记的第一病灶区域坐标信息和第一病灶类型信息,因此,基于样本图像训练生成的用于确定医学图像中的病灶区域的网络模型能够用于辅助医生确定任一与样本图像同类型的医学图像中的病灶区域。综上,利用本公开实施例提供的网络模型训练方法训练生成的用于确定医学图像中的病灶区域的网络模型,能够辅助医生确定与样本图像同类型的医学图像中的病灶区域(比如确定病灶区域对应的病灶区域坐标信息),因此,与现有技术相比,本公开实施例能够有效提高病灶区域的确定效率以及确定精准度。
本公开实施例提供的病灶区域确定方法,通过将需要确定病灶区域的医学图像输入至用于确定医学图像中的病灶区域的网络模型,以确定医学图像的病灶区域坐标信息的方式,实现了确定医学图像中的病灶区域坐标信息的目的。由于本公开实施例提供的病灶区域确定方法是基于用于确定医学图像中的病灶区域的网络模型实现的,因此,与现有方案相比,本公开实施例无需对需要确定病灶区域的医学图像进行复杂的图像增强、滤波变换等处理操作,进而避免了因图像质量等因素导致的病灶区域坐标信息预测失败等情况。即,本公开实施例所提供的病灶区域确定方法具备稳定性高、鲁棒性好等优势。
附图简要说明
通过结合附图对本申请实施例进行更详细的描述,本申请的上述以及其他目的、特征和优势将变得更加明显。附图用来提供对本申请实施例的进一步理解,并且构成说明书的一部分,与本申请实施例一起用于解释本申请,并不构成对本申请的限制。在附图中,相同的参考标号通常代表相同部件或步骤。
图1所示为本公开实施例所适用的一场景示意图。
图2所示为本公开实施例所适用的另一场景示意图。
图3所示为本公开一示例性实施例提供的网络模型训练方法的流程示意图。
图4所示为本公开一示例性实施例提供的基于样本图像训练初始网络模型,以生成用于确定医学图像中的病灶区域的网络模型的流程示意图。
图5所示为本公开另一示例性实施例提供的基于样本图像训练初始网络模型,以生成用于确定医学图像中的病灶区域的网络模型的流程示意图。
图6所示为本公开一示例性实施例提供的基于第二训练数据中的第二病灶类型信息与第一训练数据中的第一病灶类型信息调整预测模型的网络参数和图像特征提取模型的网络参数的流程示意图。
图7所示为本公开又一示例性实施例提供的基于样本图像训练初始网络模型,以生成用于确定医学图像中的病灶区域的网络模型的流程示意图。
图8所示为本公开一示例性实施例提供的初始网络模型的结构示意图。
图9所示为本公开一示例性实施例提供的基于第一训练数据和第二训练数据对初始网络模型进行第一次参数调整操作,以生成用于确定医学图像中的病灶区域的网络模型的流程示意图。
图10所示为本公开一示例性实施例提供的基于样本图像确定第一训练数据的流程示意图。
图11所示为本公开一示例性实施例提供的病灶区域确定方法的流程示意图。
图12所示为本公开另一示例性实施例提供的病灶区域确定方法的流程示意图。
图13所示为本公开一示例性实施例提供的医学图像的区域划分示意图。
图14所示为本公开一示例性实施例提供的对医学图像进行区域划分操作,以生成多个划分区域的流程示意图。
图15所示为本公开一示例性实施例提供的基于病灶区域坐标信息确定病灶区域与多个划分区域的位置关系的流程示意图。
图16所示为基于图13所示区域划分情况对包括肺野区域的医学图像的病灶区域进行定位的定位示意图。
图17所示为本公开一示例性实施例提供的网络模型训练装置的结构示意图。
图18所示为本公开一示例性实施例提供的训练模块的结构示意图。
图19所示为本公开另一示例性实施例提供的训练模块的结构示意图。
图20所示为本公开一示例性实施例提供的第一训练子单元的结构示意图。
图21所示为本公开又一示例性实施例提供的训练模块的结构示意图。
图22所示为本公开一示例性实施例提供的训练单元的结构示意图。
图23所示为本公开一示例性实施例提供的第一训练数据确定模块的结构示意图。
图24所示为本公开一示例性实施例提供的病灶区域确定装置的结构示意图。
图25所示为本公开另一示例性实施例提供的病灶区域确定装置的结构示意图。
图26所示为本公开一示例性实施例提供的划分区域生成模块的结构示意图。
图27所示为本公开一示例性实施例提供的位置关系确定模块的结构示意图。
图28所示为本公开一示例性实施例提供的电子设备的结构示意图。
实施本发明的方式
下面,将参考附图详细地描述根据本申请的示例实施例。显然,所描述的实施例仅仅是本申请的一部分实施例,而不是本申请的全部实施例,应理解,本申请不受这里描述的示例实施例的限制。
医学影像是借助某种介质(如X射线、电磁场、超声波等)与人体或动物体相互作用,把人体或动物体内部组织器官结构、密度等信息以影像方式呈现的图像。其中,医学影像主要分为描述生理形态的解剖结构图像和描述人体或动物体功能或代谢功能的功能图像。在现代医学中,医学影像是疾病预防与治疗的重要工具。
众所周知,描述生理形态的解剖结构图像主要包括X线图像、计算机断层扫描(Computed Tomography,CT)图像和磁共振图像(Magnetic Resonance Imaging,MRI)等。其中,基于X线图像的成像原理,又可分为计算机X线摄影(Computed Radiography,CR)和数字X线摄影(Digital Radiography,DR)。在实际的疾病诊断过程中,描述生理形态的解剖结构图像能够清楚显示组织器官的形态与病变情况,有助于确定组织器官中的病灶位置信息以及病灶类型信息,进而为精准地给出疾病治疗方案提供前提条件。
人体或动物体的组织器官的自身结构较为复杂,且针对不同的组织器官,病变过程中的病灶区域形态与位置等情况均能够直接或间接影响相关疾病的诊断结果。比如,肺结核是由结核分枝杆菌引起的慢性传染病,可侵及许多脏器,其危害性不言而喻。然而,肺结核的结核类型包括原发性肺结核、血行播散型肺结核、继发性肺结核、气管支气管肺结核、结核性胸膜炎和陈旧性肺结核六种,不同类型的结核在图像上的特征也是 多种多样,且病灶区域的位置信息对于确定结核类型具有重要意义。然而在现有技术中,仍需依赖人工(比如医生)进行病灶区域的确定操作(比如确定病灶区域的位置信息),因此,确定效率比较低且精准度较差。
基于上述提及的技术问题,本公开的基本构思是提出一种网络模型训练方法及装置、病灶区域确定方法及装置、计算机可读存储介质及电子设备。
该网络模型训练方法通过基于样本图像确定第一训练数据,然后确定初始网络模型,并基于样本图像训练初始网络模型,以生成用于确定医学图像中的病灶区域的网络模型的方式,实现了利用样本图像训练初始网络模型以生成用于确定医学图像中的病灶区域的网络模型的目的。由于样本图像为包括病灶区域的医学图像,基于样本图像确定的第一训练数据包括标记的第一病灶区域坐标信息和第一病灶类型信息,因此,基于样本图像训练生成的用于确定医学图像中的病灶区域的网络模型能够用于辅助医生确定任一与样本图像同类型的医学图像中的病灶区域。综上,利用本公开实施例提供的网络模型训练方法训练生成的用于确定医学图像中的病灶区域的网络模型,能够确定与样本图像同类型的医学图像中的病灶区域(比如确定病灶区域对应的病灶区域坐标信息),因此,与现有技术相比,本公开实施例能够有效提高病灶区域的确定效率以及确定精准度。
该病灶区域确定方法通过将需要确定病灶区域的医学图像输入至用于确定医学图像中的病灶区域的网络模型,以确定医学图像的病灶区域坐标信息的方式,实现了确定医学图像中的病灶区域坐标信息的目的。由于本公开实施例提供的病灶区域确定方法是基于用于确定医学图像中的病灶区域的网络模型实现的,因此,与现有方案相比,本公开实施例无需对需要确定病灶区域的医学图像进行复杂的图像增强、滤波变换等处理操作,进而避免了因图像质量等因素导致的病灶区域坐标信息预测失败等情况。即,本公开实施例所提供的病灶区域确定方法具备稳定性高、鲁棒性好等优势。
在介绍了本申请的基本原理之后,下面将参考附图来具体介绍本申请的各种非限制性实施例。
图1所示为本公开实施例所适用的一场景示意图。如图1所示,本公开实施例所适用的场景中包括服务器1和图像采集设备2,其中,服务器1和图像采集设备2之间存在通信连接关系。
具体而言,图像采集设备2用于采集包括病灶区域的医学图像以作为样本图像,服务器1用于基于图像采集设备2采集的样本图像确定第一训练数据,然后确定初始网络模型,并基于样本图像训练该初始网络模型,以生成用于确定医学图像中的病灶区域的网络模型,其中,第一训练数据包括标记的第一病灶区域坐标信息和第一病灶类型信息。即,该场景实现了一种网络模型训练方法。
或者,图像采集设备2用于采集需要确定病灶区域的医学图像,服务器1用于将图像采集设备2采集的医学图像输入至用于确定医学图像中的病灶区域的网络模型,以确定医学图像的病灶区域坐标信息。即,该场景实现了一种病灶区域确定方法。其中,用于确定医学图像中的病灶区域的网络模型可为上述场景中生成的用于确定医学图像中的病灶区域的网络模型。由于图1所示的上述场景利用服务器1实现了网络模型训练方法和/或病灶区域确定方法,因此,不但能够提高场景的适应能力,而且能够有效降低图像采集设备2的计算量。
需要说明的是,本公开还适用于另一场景。图2所示为本公开实施例所适用的另一场景示意图。具体地,该场景中包括图像处理设备3,并且,图像处理设备3中包括图像采集模块31和计算模块32。
具体而言,图像处理设备3中的图像采集模块31用于采集包括病灶区域的医学图像以作为样本图像,图像处理设备3中的计算模块32用于基于图像采集模块31采集的样本图像确定第一训练数据,然后确定初始网络模型,并基于样本图像训练该初始网络模型,以生成用于确定医学图像中的病灶区域的网络模型,其中,第一训练数据包括标记的第一病灶区域坐标信息和第一病灶类型信息。即,该场景实现了一种网络模型训练方法。
或者,图像处理设备3中的图像采集模块31用于采集需要确定病灶区域的医学图像,图像处理设备3中的计算模块32用于将图像采集模块31采集的医学图像输入至用于确定医学图像中的病灶区域的网络模型,以确定医学图像的病灶区域坐标信息。即,该场景实现了一种病灶区域确定方法。其中,用于确定医学图像中的病灶区域的网络模型可为上述场景中生成的用于确定医学图像中的病灶区域的网络模型。由于图2所示的上述场景利用图像处理设备3实现了网络模型训练方法和/或病灶区域确定方法,无需与服务器等相关装置进行数据传输操作,因此,上述场景能够保证网络模型训练方法或病灶区域确定方法的实时性。
需要说明的是,上述场景中提及的图像采集设备2和图像采集模块31,包括但不限于为X线机、CT扫描仪、MRI设备等图像采集装置。对应地,上述场景中提及的图像采集设备2和图像采集模块31所采集的医学图像,包括但不限于为X线图像、CT图像、MRI图像等能够将人体或动物体内部组织器官结构、密度等信息以影像方式呈现的医学图像。此外,应当理解,本公开实施例提供的网络模型训练方法和病灶区域确定方法,不局限于上述提及的医学图像的适用场景,只要涉及到基于特征区域确定的应用场景,均属于本公开实施例的适用范围。比如,监控图像中的感兴趣区域的确定场景。
图3所示为本公开一示例性实施例提供的网络模型训练方法的流程示意图。如图3所示,本公开实施例提供的网络模型训练方法包括如下步骤。
步骤10,基于样本图像确定第一训练数据,其中,样本图像包括病灶区域,第一训练数据包括标记的第一病灶区域坐标信息和第一病灶类型信息。
在本公开实施例中,样本图像为包括病灶区域的医学图像。
步骤20,确定初始网络模型,并基于样本图像训练初始网络模型,以生成用于确定医学图像中的病灶区域的网络模型。
示例性地,初始网络模型为卷积神经网络(Convolutional Neural Networks,CNN)模型。
可选地,初始网络模型和用于确定医学图像中的病灶区域的网络模型的模型结构是相同的,初始网络模型和用于确定医学图像中的病灶区域的网络模型之间的差异为模型的网络参数差异。即,初始网络模型中的 网络参数为初始网络参数,然后利用样本图像对初始网络模型进行训练,训练过程中会调整初始网络参数,以最终生成用于确定医学图像中的病灶区域的网络模型中的网络参数。比如,基于梯度下降法不断调节初始网络模型的网络参数,以最终生成用于确定医学图像中的病灶区域的网络模型中的网络参数。
在实际应用过程中,首先基于样本图像确定第一训练数据,然后确定初始网络模型,并基于样本图像训练初始网络模型,以生成用于确定医学图像中的病灶区域的网络模型。
本公开实施例提供的网络模型训练方法,通过基于样本图像确定第一训练数据,然后确定初始网络模型,并基于样本图像训练初始网络模型,以生成用于确定医学图像中的病灶区域的网络模型的方式,实现了利用样本图像训练初始网络模型以生成用于确定医学图像中的病灶区域的网络模型的目的。由于样本图像为包括病灶区域的医学图像,基于样本图像确定的第一训练数据包括标记的第一病灶区域坐标信息和第一病灶类型信息,因此,基于样本图像训练生成的用于确定医学图像中的病灶区域的网络模型能够用于辅助医生确定任一与样本图像同类型的医学图像中的病灶区域。综上,利用本公开实施例提供的网络模型训练方法训练生成的用于确定医学图像中的病灶区域的网络模型,能够辅助医生确定与样本图像同类型的医学图像中的病灶区域(比如确定病灶区域对应的病灶区域坐标信息),因此,与现有技术相比,本公开实施例能够有效提高病灶区域的确定效率以及确定精准度。
需要说明的是,上述提及的与样本图像同类型的医学图像,指的是医学图像包括的组织器官与样本图像中的组织器官属于同种类型。比如,样本图像为包括人体肺野区域的胸片图像,那么,医学图像同样为包括人体肺野区域的胸片图像。又比如,样本图像为包括人体脑部区域的头部图像,那么,医学图像同样为包括人体脑部区域的头部图像。
示例性地,在本公开一实施例中,样本图像为包括肺结核病灶区域的肺部图像,第一病灶类型信息包括原发性肺结核、血行播散型肺结核、继发性肺结核、气管支气管肺结核、结核性胸膜炎、陈旧性肺结核中的至少一种。由此,能够借助上述实施例确定的用于确定医学图像中的病灶区域的网络模型实现预测包括肺结核病灶区域的医学图像的病灶区域坐标信息的目的。
图4所示为本公开一示例性实施例提供的基于样本图像训练初始网络模型,以生成用于确定医学图像中的病灶区域的网络模型的流程示意图。在本公开图3所示实施例的基础上延伸出本公开图4所示实施例,下面着重叙述图4所示实施例与图3所示实施例的不同之处,相同之处不再赘述。
如图4所示,在本公开实施例提供的网络模型训练方法中,基于样本图像训练初始网络模型,以生成用于确定医学图像中的病灶区域的网络模型步骤,包括如下步骤。
步骤21,将样本图像输入至初始网络模型,以确定与第一训练数据对应的第二训练数据,其中,第二训练数据包括第二病灶区域坐标信息和第二病灶类型信息。
第二训练数据指的是将样本图像输入至初始网络模型后,初始网络模型确定的与样本图像对应的训练数据(其中,该训练数据中包括病灶区域坐标信息和病灶类型信息)。
步骤22,基于第一训练数据和第二训练数据对初始网络模型进行第一次参数调整操作,以生成用于确定医学图像中的病灶区域的网络模型。
第一次参数调整操作所调整的具体的网络参数可根据实际情况确定,包括但不限于学习率、图像尺寸等。
在实际应用过程中,首先基于样本图像确定第一训练数据,然后确定初始网络模型,将样本图像输入至初始网络模型,以确定与第一训练数据对应的第二训练数据,并基于第一训练数据和第二训练数据对初始网络模型进行第一次参数调整操作,以生成用于确定医学图像中的病灶区域的网络模型。
本公开实施例提供的网络模型训练方法,通过将样本图像输入至初始网络模型,以确定与第一训练数据对应的第二训练数据,然后基于第一训练数据和第二训练数据对初始网络模型进行第一次参数调整操作,以生成用于确定医学图像中的病灶区域的网络模型的方式,实现了基于样本图像训练初始网络模型,以生成用于确定医学图像中的病灶区域的网络模型的目的。由于第一训练数据是预先标记的,第二训练数据是基于初始网络模型确定的,因此,第一训练数据和第二训练数据之间的差异能够表征初始网络模型的预测精准度。基于此,在本公开实施例中,基于第一训练数据和第二训练数据对初始网络模型进行第一次参数调整操作后,能够有效缩小第一训练数据和第二训练数据之间的误差范围,因此,本公开实施例能够有效提高最终生成的用于确定医学图像中的病灶区域的网络模型的预测精准度。
图5所示为本公开另一示例性实施例提供的基于样本图像训练初始网络模型,以生成用于确定医学图像中的病灶区域的网络模型的流程示意图。在本公开图4所示实施例的基础上延伸出本公开图5所示实施例,下面着重叙述图5所示实施例与图4所示实施例的不同之处,相同之处不再赘述。
在图5所示实施例中,初始网络模型包括信号连接的图像特征提取模型和预测模型,其中,图像特征提取模型用于提取医学图像的图像特征信息,预测模型用于预测与医学图像对应的训练数据。那么,如图5所示,在本公开实施例提供的网络模型训练方法中,基于第一训练数据和第二训练数据对初始网络模型进行第一次参数调整操作,以生成用于确定医学图像中的病灶区域的网络模型步骤,包括如下步骤。
步骤221,基于第二训练数据中的第二病灶类型信息与第一训练数据中的第一病灶类型信息调整预测模型的网络参数和图像特征提取模型的网络参数,以生成用于确定医学图像中的病灶区域的网络模型。
应当理解,本公开实施例中提及的图像特征提取模型和预测模型的具体模型结构,可根据实际情况确定。
在实际应用过程中,首先基于样本图像确定第一训练数据,然后确定初始网络模型,将样本图像输入至初始网络模型,以确定与第一训练数据对应的第二训练数据,然后基于第二训练数据中的第二病灶类型信息与第一训练数据中的第一病灶类型信息调整预测模型的网络参数和图像特征提取模型的网络参数,以生成用于确定医学图像中的病灶区域的网络模型。
本公开实施例提供的网络模型训练方法,通过基于第二训练数据中的第二病灶类型信息与第一训练数据中的第一病灶类型信息调整预测模型的网络参数和图像特征提取模型的网络参数的方式,实现了基于第一训 练数据和第二训练数据对初始网络模型进行第一次参数调整操作的目的。由于病灶类型信息能够有助于确定病灶区域坐标信息,因此,本公开实施例能够进一步提高所确定的病灶区域坐标信息的精准度,进而提高病灶区域的定位精准度。
图6所示为本公开一示例性实施例提供的基于第二训练数据中的第二病灶类型信息与第一训练数据中的第一病灶类型信息调整预测模型的网络参数和图像特征提取模型的网络参数的流程示意图。在本公开图5所示实施例的基础上延伸出本公开图6所示实施例,下面着重叙述图6所示实施例与图5所示实施例的不同之处,相同之处不再赘述。
在图6所示实施例中,预测模型包括坐标信息预测子模型和类型信息预测子模型,其中,坐标信息预测子模型用于预测病灶区域坐标信息,类型信息预测子模型用于预测病灶类型信息。如图6所示,在本公开实施例提供的网络模型训练方法中,基于第二训练数据中的第二病灶类型信息与第一训练数据中的第一病灶类型信息调整预测模型的网络参数和图像特征提取模型的网络参数步骤,包括如下步骤。
步骤2211,基于第二病灶类型信息和第一病灶类型信息调整预测模型中的类型信息预测子模型的网络参数。
由于第二病灶类型信息是基于预测模型中的类型信息预测子模型确定的,因此,基于第二病灶类型信息和预先标记的第一病灶类型信息能够调整预测模型中的类型信息预测子模型的网络参数,以进一步提高类型信息预测子模型的预测精度。
步骤2212,基于调整后的类型信息预测子模型调整图像特征提取模型的网络参数。
由于调整后的类型信息预测子模型的预测精度有所提高,且所确定的高精准度的病灶类型信息是基于图像特征提取模型提取的图像特征信息获得的,因此,基于调整后的类型信息预测子模型调整图像特征提取模型的网络参数操作,能够进一步提高图像特征提取模型所提取的图像特征信息的精准度。
步骤2213,基于调整后的图像特征提取模型调整预测模型中的坐标信息预测子模型的网络参数。
由于调整后的图像特征提取模型的预测精准度有所提高,且预测模型中的坐标信息预测子模型是以图像特征提取模型所预测的图像特征信息作为输入数据的,因此,基于调整后的图像特征提取模型调整预测模型中的坐标信息预测子模型的网络参数操作,能够进一步提高坐标信息预测子模型所确定的病灶区域坐标信息的精准度。
应当理解,本公开实施例中提及的坐标信息预测子模型和类型信息预测子模型的具体模型结构,亦可根据实际情况确定。
在实际应用过程中,首先基于样本图像确定第一训练数据,然后确定初始网络模型,将样本图像输入至初始网络模型,以确定与第一训练数据对应的第二训练数据,然后基于第二病灶类型信息和第一病灶类型信息调整预测模型中的类型信息预测子模型的网络参数,并基于调整后的类型信息预测子模型调整图像特征提取模型的网络参数,以及基于调整后的图像特征提取模型调整预测模型中的坐标信息预测子模型的网络参数,以生成用于确定医学图像中的病灶区域的网络模型。
本公开实施例提供的网络模型训练方法,通过基于第二病灶类型信息和第一病灶类型信息调整预测模型中的类型信息预测子模型的网络参数,然后基于调整后的类型信息预测子模型调整图像特征提取模型的网络参数,并基于调整后的图像特征提取模型调整预测模型中的坐标信息预测子模型的网络参数的方式,实现了基于第二训练数据中的第二病灶类型信息与第一训练数据中的第一病灶类型信息调整预测模型的网络参数和图像特征提取模型的网络参数的目的。结合上述实施例分析内容可知,本公开实施例能够进一步提高所确定的病灶区域坐标信息的精准度。
图7所示为本公开又一示例性实施例提供的基于样本图像训练初始网络模型,以生成用于确定医学图像中的病灶区域的网络模型的流程示意图。在本公开图5所示实施例的基础上延伸出本公开图7所示实施例,下面着重叙述图7所示实施例与图5所示实施例的不同之处,相同之处不再赘述。
如图7所示,在本公开实施例提供的网络模型训练方法中,基于第一训练数据和第二训练数据对初始网络模型进行第一次参数调整操作,以生成用于确定医学图像中的病灶区域的网络模型步骤,包括如下步骤。
步骤2214,基于第二训练数据中的第二病灶类型信息与第一训练数据中的第一病灶类型信息调整预测模型的网络参数和图像特征提取模型的网络参数,基于第二训练数据中的第二病灶区域坐标信息和第一训练数据中的第一病灶区域坐标信息调整图像特征提取模型的网络参数,以生成用于确定医学图像中的病灶区域的网络模型。
在实际应用过程中,首先基于样本图像确定第一训练数据,然后确定初始网络模型,将样本图像输入至初始网络模型,以确定与第一训练数据对应的第二训练数据,然后基于第二训练数据中的第二病灶类型信息与第一训练数据中的第一病灶类型信息调整预测模型的网络参数和图像特征提取模型的网络参数,基于第二训练数据中的第二病灶区域坐标信息和第一训练数据中的第一病灶区域坐标信息调整图像特征提取模型的网络参数,以生成用于确定医学图像中的病灶区域的网络模型。
本公开实施例提供的网络模型训练方法,通过基于第二训练数据中的第二病灶类型信息与第一训练数据中的第一病灶类型信息调整预测模型的网络参数和图像特征提取模型的网络参数,并基于第二训练数据中的第二病灶区域坐标信息和第一训练数据中的第一病灶区域坐标信息调整图像特征提取模型的网络参数的方式,实现了基于第一训练数据和第二训练数据对初始网络模型进行第一次参数调整操作的目的。由于与图5所示实施例相比,本公开实施例增加了基于第二病灶区域坐标信息和第一病灶区域坐标信息调整图像特征提取模型的网络参数的步骤,因此,本公开实施例能够进一步提高所确定的病灶区域坐标信息的精准度,进而提高病灶区域的定位精准度。
为进一步详细说明上述实施例中提及的用于确定医学图像中的病灶区域的网络模型的具体结构,下面基于图8所示的结构示意图进行举例说明。
图8所示为本公开一示例性实施例提供的初始网络模型的结构示意图。如图8所示,在本公开实施例中,图像特征提取模型包括ResNext-50网络模型41和全景特征金字塔网络模型42,预测模型包括预测网络模型43。其中,类型信息预测子模型为分类预测模块431,坐标信息预测子模型为坐标预测模块432。
继续参照图8所示,G表征分组卷积数,在未标识G数值的模块结构中,默认G=1。S表征卷积步长,在未标识S数值的模块结构中,默认S=1。MP表示最大值池化层。ResNext-50网络模型41中的“×3”、“×4”、“×6”和“×3”分别表示该模块重复叠堆3次、4次、6次和3次。
在实际训练过程中,将样本图像输入至ResNext-50网络模型41和全景特征金字塔网络模型42进行图像特征提取操作,以输出三层特征层P3,P4和P5,然后这三层特征层P3,P4和P5分别输入至分类预测模块431和坐标预测模块432。在本公开实施例中,若样本图像尺寸为512×512,三层特征层P3,P4和P5的尺寸分别为batch×256×64×64,batch×256×32×32,batch×256×16×16。其中,batch表示批尺寸,即计算梯度所使用的样本量。
在分类预测模块431中,特征层P4和P5分别经过2倍和4倍的上采样后,再与特征层P3进行融合操作以生成一尺寸为batch×768×64×64特征图,该特征图再经过一系列的卷积和池化操作后,得到batch×2n的矩阵,其中,n表示需要预测的类别数量,最后用softmax分类器得到针对每一类别的预测概率。
需要说明的是,在不断迭代的训练过程中,分类预测模块431会利用特征层P3,P4和P5反向影响全景特征金字塔网络模型42的网络参数,进而继续间接影响ResNext-50网络模型41的网络参数。由于坐标预测模块432的输入数据是基于ResNext-50网络模型41和全景特征金字塔网络模型42确定的,因此,分类预测模块431会间接影响坐标预测模块432的网络参数,从而借助分类预测模块431的病灶类型信息提高坐标预测模块432的预测精准度。综上,本公开实施例不仅能够降低过拟合,而且能够进一步提高所确定的病灶区域坐标信息的精准度。
应当理解,损失函数可以用来评价网络模型输出的预测结果与真实结果之间的差异。损失函数是非负实值函数,损失函数的损失值能够表征网络模型的预测性能,即,损失函数的损失值越小,网络模型的预测性能越好。上述实施例提及的不断迭代的训练过程,目的是为了让损失函数的损失值尽可能小,以优化网络模型的预测性能。因此,损失函数对于提高网络模型的预测性能具有重要意义。
基于此,在本公开一实施例中,类型信息预测子模型中的损失函数基于下述计算式(1)确定。
$$J(\theta) = -\frac{1}{m}\sum_{i=1}^{m}\left[y_i \log h_i + (1-y_i)\log(1-h_i)\right] \tag{1}$$
在计算式(1)中,θ表征类型信息预测子模型的网络参数,m表征类型数量,h表征预测概率,y表征每张图像的标签。
上述计算式(1)中记载的类型信息预测子模型中的损失函数为交叉熵损失函数。由于交叉熵损失函数中包括对数函数信息,因此,与均方差损失函数相比,当训练的预测结果接近真实结果时,交叉熵损失函数仍然可以保持在高梯度状态,即网络模型的收敛速度不会受到不良影响。
需要说明的是,类型信息预测子模型中的损失函数不局限于上述计算式(1)记载的损失函数,只要损失函数中包括基于预测概率生成的对数函数信息,就能够实现上述实施例提及的有益效果。
通常情况下,每种类型的样本数量并非是完全相同的,不同种类型的数量可能存在较大差异。当不同种类型的数量可能存在较大差异时,如采用计算式(1)记载的损失函数进行计算,则可能出现数量多的类型在损失函数中所占的比重比较大,数量少的类型在损失函数中所占的比重比较小,进而导致对数量少的类型的训练效果没有数量多的类型的训练效果好的情况。基于此,更优选地,基于上述实施例提及的计算式(1)表征的交叉熵损失函数延伸出本公开另一实施例。在本公开实施例的损失函数中,每一类型对应的损失因子均对应设置有一权重参数w i。示例性地,权重参数w i根据所对应的类型在整个样本数据集中所占的比例确定,且数值范围在0到1之间。
需要说明的是,本公开实施例通过为每一类型对应的损失因子设置一对应的权重参数的方式,实现了进一步均衡训练效果的目的,进而进一步提高了网络模型的预测精准度。
在本公开另一实施例中,坐标信息预测子模型中的损失函数基于下述计算式(2)确定。
$$L(x,c,l,g) = \frac{1}{N}\left(L_{conf}(x,c) + \alpha L_{loc}(x,l,g)\right) \tag{2}$$
在计算式(2)中,N表征匹配的预设框的数量,x表征匹配了的框是否属于类型P,l表征预测框,g表征真实框,c表征所框选的目标属于类型P的置信度。应当理解,类型P可以为任一类型,本公开实施例对此不进行限定。
需要说明的是,本公开实施例提及的坐标信息预测子模型中的损失函数,可以应用到坐标信息预测子模型中的任一预测单元。比如,在图8所示实施例中,将计算式(2)中记载的损失函数应用至坐标预测模块432的类别预测单元和坐标预测单元,即类别预测单元中的损失函数和坐标预测单元中的损失函数均为上述计算式(2)。
图9所示为本公开一示例性实施例提供的基于第一训练数据和第二训练数据对初始网络模型进行第一次参数调整操作,以生成用于确定医学图像中的病灶区域的网络模型的流程示意图。在本公开图4所示实施例的基础上延伸出本公开图9所示实施例,下面着重叙述图9所示实施例与图4所示实施例的不同之处,相同之处不再赘述。
如图9所示,在本公开实施例提供的网络模型训练方法中,基于第一训练数据和第二训练数据对初始网络模型进行第一次参数调整操作,以生成用于确定医学图像中的病灶区域的网络模型步骤,包括如下步骤。
步骤222,基于第一训练数据和第二训练数据对初始网络模型进行第一次参数调整操作。
步骤223,基于样本图像和进行第一次参数调整操作后的初始网络模型确定与第一训练数据对应的第三训练数据,其中,第三训练数据包括第三病灶区域坐标信息和第三病灶类型信息。
步骤224,基于第一训练数据和第三训练数据对进行第一次参数调整操作后的初始网络模型进行第二次参数调整操作,以生成用于确定医学图像中的病灶区域的网络模型。
在实际应用过程中,首先基于样本图像确定第一训练数据,然后确定初始网络模型,将样本图像输入至初始网络模型,以确定与第一训练数据对应的第二训练数据,并基于第一训练数据和第二训练数据对初始网络模型进行第一次参数调整操作,然后基于样本图像和进行第一次参数调整操作后的初始网络模型确定与第一训练数据对应的第三训练数据,并基于第一训练数据和第三训练数据对进行第一次参数调整操作后的初始网络模型进行第二次参数调整操作,以生成用于确定医学图像中的病灶区域的网络模型。
需要说明的是,针对初始网络模型的参数调整次数,不局限于本公开实施例提及的两次,亦可以是三次、四次或者更多次,直到所生成的用于确定医学图像中的病灶区域的网络模型的预测精准度达到预设要求即可。
本公开实施例提供的网络模型训练方法,实现了对初始网络模型进行多次参数调整操作的目的。因此,与图4所示实施例相比,本公开实施例能够进一步提高最终生成的用于确定医学图像中的病灶区域的网络模型的预测精准度。
图10所示为本公开一示例性实施例提供的基于样本图像确定第一训练数据的流程示意图。在本公开图3所示实施例的基础上延伸出本公开图10所示实施例,下面着重叙述图10所示实施例与图3所示实施例的不同之处,相同之处不再赘述。
如图10所示,在本公开实施例提供的网络模型训练方法中,基于样本图像确定第一训练数据步骤,包括如下步骤。
步骤11,确定包括病灶区域的样本图像和标记规则。
示例性地,标记规则基于人工(比如医生)预先确定。比如,标记规则为标记出样本图像中的病灶区域对应的病灶区域坐标信息和病灶类型信息。
步骤12,基于标记规则对样本图像进行标记操作,以生成第一训练数据。
在实际应用过程中,首先确定包括病灶区域的样本图像和标记规则,并基于标记规则对样本图像进行标记操作以生成第一训练数据,然后确定初始网络模型,并基于样本图像训练初始网络模型,以生成用于确定医学图像中的病灶区域的网络模型。
本公开实施例提供的网络模型训练方法,通过确定包括病灶区域的样本图像和标记规则,并基于标记规则对样本图像进行标记操作,以生成第一训练数据的方式,实现了基于样本图像确定第一训练数据的目的。由于标记规则可基于样本图像的实际情况预先确定,因此,本公开实施例能够有效提高标记灵活性,进而提高所训练的用于确定医学图像中的病灶区域的网络模型的适应能力和应用广泛性。
图11所示为本公开一示例性实施例提供的病灶区域确定方法的流程示意图。如图11所示,本公开实施例提供的病灶区域确定方法包括如下步骤。
步骤50,确定需要确定病灶区域的医学图像。
步骤60,将医学图像输入至用于确定医学图像中的病灶区域的网络模型,以确定医学图像的病灶区域坐标信息。
需要说明的是,步骤60中提及的用于确定医学图像中的病灶区域的网络模型,可以基于上述任一实施例提及的网络模型训练方法获得。
在实际应用过程中,首先确定需要确定病灶区域的医学图像,然后将医学图像输入至用于确定医学图像中的病灶区域的网络模型,以确定医学图像的病灶区域坐标信息。
本公开实施例提供的病灶区域确定方法,通过将需要确定病灶区域的医学图像输入至用于确定医学图像中的病灶区域的网络模型,以确定医学图像的病灶区域坐标信息的方式,实现了确定医学图像中的病灶区域坐标信息的目的。由于本公开实施例提供的病灶区域确定方法是基于用于确定医学图像中的病灶区域的网络模型实现的,因此,与现有方案相比,本公开实施例无需对需要确定病灶区域的医学图像进行复杂的图像增强、滤波变换等处理操作,进而避免了因图像质量等因素导致的病灶区域坐标信息预测失败等情况。即,本公开实施例所提供的病灶区域确定方法具备稳定性高、鲁棒性好等优势。
图12所示为本公开另一示例性实施例提供的病灶区域确定方法的流程示意图。在本公开图11所示实施例的基础上延伸出本公开图12所示实施例,下面着重叙述图12所示实施例与图11所示实施例的不同之处,相同之处不再赘述。
如图12所示,在本公开实施例提供的病灶区域确定方法中,在将医学图像输入至用于确定医学图像中的病灶区域的网络模型,以确定医学图像的病灶区域坐标信息步骤后,还包括如下步骤。
步骤70,对医学图像进行区域划分操作,以生成多个划分区域。
步骤80,基于病灶区域坐标信息确定病灶区域与多个划分区域的位置关系。
在实际应用过程中,首先确定需要确定病灶区域的医学图像,然后将医学图像输入至用于确定医学图像中的病灶区域的网络模型,以确定医学图像的病灶区域坐标信息,继而对医学图像进行区域划分操作以生成多个划分区域,并基于病灶区域坐标信息确定病灶区域与多个划分区域的位置关系。
本公开实施例提供的病灶区域确定方法,通过对医学图像进行区域划分操作以生成多个划分区域,然后基于病灶区域坐标信息确定病灶区域与多个划分区域的位置关系的方式,实现了确定病灶区域坐标信息与多个划分区域的位置关系的目的。由于借助病灶区域坐标信息与多个划分区域的位置关系能够更好地实现病灶区域定位操作,因此,本公开实施例能够进一步辅助后续的疾病诊断操作。
图13所示为本公开一示例性实施例提供的医学图像的区域划分示意图。如图13所示,本公开实施例提供的医学图像为包括肺野区域的医学图像。在本公开实施例中,该医学图像包括关键点1至16,并基于该关键点1至16之间的对应关系生成了多条区域划分线,该多条区域划分线将肺野区域划分为了多个划分区域。具体地,将肺野区域划分为上野内带区域、上野中带区域、上野外带区域、中野内带区域、中野中带区域、中野外带区域、下野内带区域、下野中带区域和下野外带区域。
示例性地,在病灶区域确定过程中,可基于所确定的病灶区域坐标信息确定该病灶区域于上述多个划分区域的位置关系,进而生成结构化报告供医生参考。比如,右肺的上野外带区域可见结节影。
应当理解,关键点的具体位置以及具体数量可根据实际情况确定,本公开实施例对此不进行统一限定。
图14所示为本公开一示例性实施例提供的对医学图像进行区域划分操作,以生成多个划分区域的流程示意图。在本公开图12所示实施例的基础上延伸出本公开图14所示实施例,下面着重叙述图14所示实施例与图12所示实施例的不同之处,相同之处不再赘述。
如图14所示,在本公开实施例提供的病灶区域确定方法中,对医学图像进行区域划分操作,以生成多个划分区域步骤,包括如下步骤。
步骤71,将医学图像输入至关键点网络模型,以确定医学图像对应的多个关键点的坐标信息集合,其中,坐标信息集合用于对医学图像进行区域划分操作。
可选地,关键点网络模型为卷积神经网络(Convolutional Neural Networks,CNN)模型。
步骤72,基于坐标信息集合对医学图像进行区域划分操作,以生成多个划分区域。
在实际应用过程中,首先确定需要确定病灶区域的医学图像,然后将医学图像输入至用于确定医学图像中的病灶区域的网络模型,以确定医学图像的病灶区域坐标信息,继而将医学图像输入至关键点网络模型,以确定医学图像对应的多个关键点的坐标信息集合,并基于坐标信息集合对医学图像进行区域划分操作,以生成多个划分区域,最后基于病灶区域坐标信息确定病灶区域与多个划分区域的位置关系。
本公开实施例提供的病灶区域确定方法,通过将医学图像输入至关键点网络模型,以确定医学图像对应的多个关键点的坐标信息集合,然后基于坐标信息集合对医学图像进行区域划分操作以生成多个划分区域的方式,实现了对医学图像进行区域划分操作以生成多个划分区域的目的。由于本公开实施例提及的区域划分操作是基于关键点网络模型实现的,因此,与现有方案相比,本公开实施例无需对需要进行区域划分的医学图像进行复杂的图像增强、滤波变换等处理操作,进而避免了因图像质量等因素导致的区域划分失败等情况。又由于本公开实施例将区域划分问题转换为了关键点坐标信息的定位问题,因此,本公开实施例能够极大简化区域划分操作的划分复杂度。
图15所示为本公开一示例性实施例提供的基于病灶区域坐标信息确定病灶区域与多个划分区域的位置关系的流程示意图。在本公开图12所示实施例的基础上延伸出本公开图15所示实施例,下面着重叙述图15所示实施例与图12所示实施例的不同之处,相同之处不再赘述。
如图15所示,在本公开实施例提供的病灶区域确定方法中,基于病灶区域坐标信息确定病灶区域与多个划分区域的位置关系步骤,包括如下步骤。
步骤81,基于病灶区域坐标信息确定病灶区域的重心的位置信息。
步骤82,基于重心和多个划分区域的位置关系确定病灶区域与多个划分区域的位置关系。
示例性地,限定病灶区域的重心的位置信息即为病灶区域的位置信息。
在实际应用过程中,首先确定需要确定病灶区域的医学图像,然后将医学图像输入至用于确定医学图像中的病灶区域的网络模型,以确定医学图像的病灶区域坐标信息,继而对医学图像进行区域划分操作以生成多个划分区域,最后基于病灶区域坐标信息确定病灶区域的重心的位置信息,并基于重心和多个划分区域的位置关系确定病灶区域与多个划分区域的位置关系。
本公开实施例提供的病灶区域确定方法,通过确定病灶区域的重心的位置信息,并基于病灶区域的重心和多个划分区域的位置关系确定病灶区域与多个划分区域的位置信息的方式,实现了确定医学图像的病灶区域与多个划分区域之间的位置关系的目的。由于病灶区域的形状及体积千变万化,因此,本公开实施例通过基于病灶区域的重心确定病灶区域的相对位置信息的方式,有效保证了病灶区域的定位精准度。
下面结合具体医学图像描述与图15所示实施例提供的病灶区域确定方法对应的具体应用实例。
图16所示为基于图13所示区域划分情况对包括肺野区域的医学图像的病灶区域进行定位的定位示意图。如图16所示,通过将肺野区域的关键点按照预设规则连线后,即可框定出医学图像中的左肺野区域和右肺野区域。具体地,基于第一方向关键点1和2生成第一连接线,基于第一方向关键点3和4生成第二连接线,基于第二方向关键点9和10生成第三连接线,基于第二方向关键点11和12生成第四连接线。那么,基于第一方向关键点1、2、3和4以及第二方向关键点9、10、11和12的位置信息可知,第一连接线至第四连接线能够共同形成框定左肺野区域的轮廓线。同样地,基于第一方向关键点5、6、7和8以及第二方向关键点13、14、15和16所形成的连接线能够形成框定右肺野区域的轮廓线。具体连线方式可参照上述针对左肺野区域的轮廓线的连线方式,本公开实施例不再赘述。
继续参照图16所示,在本公开实施例中,医学图像中的病灶区域包括病灶区域M和病灶区域N,其中,病灶区域M具有规则的边界,病灶区域M的边界在该医学图像中呈现为矩形框,并且病灶区域M的重心为m;病灶区域N具有不规则的边界,病灶区域N的边界在该医学图像中呈现为不规则多边形框,并且病灶区域N的重心为n。
在实际应用过程中,当需要对病灶区域进行定位时,可基于病灶区域的重心的位置信息确定病灶区域的位置信息。比如,由于病灶区域M的重心m位于左肺野区域的中野中带,那么可确定病灶区域M位于左肺野区域的中野中带。示例性地,在辅助医生诊断过程中,则可描述为“左肺野区域中野中带可见病灶区域M”。又比如,由于病灶区域N的重心n位于右肺野区域的上野中带,那么可确定病灶区域N位于右肺野区域的 上野中带。示例性地,在辅助医生诊断过程中,则可描述为“右肺野区域上野中带可见病灶区域N”。
图17所示为本公开一示例性实施例提供的网络模型训练装置的结构示意图。如图17所示,本公开实施例提供的网络模型训练装置包括:
第一训练数据确定模块100,用于基于样本图像确定第一训练数据,其中,样本图像包括病灶区域,第一训练数据包括标记的第一病灶区域坐标信息和第一病灶类型信息;
训练模块200,用于确定初始网络模型,并基于样本图像训练初始网络模型,以生成用于确定医学图像中的病灶区域的网络模型。
图18所示为本公开一示例性实施例提供的训练模块的结构示意图。在本公开图17所示实施例的基础上延伸出本公开图18所示实施例,下面着重叙述图18所示实施例与图17所示实施例的不同之处,相同之处不再赘述。
如图18所示,在本公开实施例提供的网络模型训练装置中,训练模块200包括:
第二训练数据确定单元210,用于将样本图像输入至初始网络模型,以确定与第一训练数据对应的第二训练数据,其中,第二训练数据包括第二病灶区域坐标信息和第二病灶类型信息;
训练单元220,用于基于第一训练数据和第二训练数据对初始网络模型进行第一次参数调整操作,以生成用于确定医学图像中的病灶区域的网络模型。
图19所示为本公开另一示例性实施例提供的训练模块的结构示意图。在本公开图18所示实施例的基础上延伸出本公开图19所示实施例,下面着重叙述图19所示实施例与图18所示实施例的不同之处,相同之处不再赘述。
如图19所示,在本公开实施例提供的网络模型训练装置中,训练单元220包括:
第一训练子单元2210,用于基于第二训练数据中的第二病灶类型信息与第一训练数据中的第一病灶类型信息调整预测模型的网络参数和图像特征提取模型的网络参数,以生成用于确定医学图像中的病灶区域的网络模型。
图20所示为本公开一示例性实施例提供的第一训练子单元的结构示意图。在本公开图19所示实施例的基础上延伸出本公开图20所示实施例,下面着重叙述图20所示实施例与图19所示实施例的不同之处,相同之处不再赘述。
如图20所示,在本公开实施例提供的网络模型训练装置中,第一训练子单元2210包括:
第一网络参数调整子单元22110,用于基于第二病灶类型信息和第一病灶类型信息调整预测模型中的类型信息预测子模型的网络参数;
第二网络参数调整子单元22120,用于基于调整后的类型信息预测子模型调整图像特征提取模型的网络参数;
第三网络参数调整子单元22130,用于基于调整后的图像特征提取模型调整预测模型中的坐标信息预测子模型的网络参数。
图21所示为本公开又一示例性实施例提供的训练模块的结构示意图。在本公开图19所示实施例的基础上延伸出本公开图21所示实施例,下面着重叙述图21所示实施例与图19所示实施例的不同之处,相同之处不再赘述。
如图21所示,在本公开实施例提供的网络模型训练装置中,第一训练子单元2210包括:
第二训练子单元22140,用于基于第二训练数据中的第二病灶类型信息与第一训练数据中的第一病灶类型信息调整预测模型的网络参数和图像特征提取模型的网络参数,基于第二训练数据中的第二病灶区域坐标信息和第一训练数据中的第一病灶区域坐标信息调整图像特征提取模型的网络参数,以生成用于确定医学图像中的病灶区域的网络模型。
图22所示为本公开一示例性实施例提供的训练单元的结构示意图。在本公开图18所示实施例的基础上延伸出本公开图22所示实施例,下面着重叙述图22所示实施例与图18所示实施例的不同之处,相同之处不再赘述。
如图22所示,在本公开实施例提供的网络模型训练装置中,训练单元220包括:
第一次参数调整子单元2220,用于基于第一训练数据和第二训练数据对初始网络模型进行第一次参数调整操作;
第三训练数据确定子单元2230,用于基于样本图像和进行第一次参数调整操作后的初始网络模型确定与第一训练数据对应的第三训练数据,其中,第三训练数据包括第三病灶区域坐标信息和第三病灶类型信息;
第二次参数调整子单元2240,用于基于第一训练数据和第三训练数据对进行第一次参数调整操作后的初始网络模型进行第二次参数调整操作,以生成用于确定医学图像中的病灶区域的网络模型。
图23所示为本公开一示例性实施例提供的第一训练数据确定模块的结构示意图。在本公开图17所示实施例的基础上延伸出本公开图23所示实施例,下面着重叙述图23所示实施例与图17所示实施例的不同之处,相同之处不再赘述。
如图23所示,在本公开实施例提供的网络模型训练装置中,第一训练数据确定模块100包括:
确定单元110,用于确定包括病灶区域的样本图像和标记规则;
第一训练数据生成单元120,用于基于标记规则对样本图像进行标记操作,以生成第一训练数据。
图24所示为本公开一示例性实施例提供的病灶区域确定装置的结构示意图。如图24所示,本公开实施例提供的病灶区域确定装置包括:
图像确定模块500,用于确定需要确定病灶区域的医学图像;
病灶区域确定模块600,用于将医学图像输入至用于确定医学图像中的病灶区域的网络模型,以确定医学图像的病灶区域坐标信息。
图25所示为本公开另一示例性实施例提供的病灶区域确定装置的结构示意图。在本公开图24所示实施例的基础上延伸出本公开图25所示实施例,下面着重叙述图25所示实施例与图24所示实施例的不同之处,相同之处不再赘述。
如图25所示,本公开实施例提供的病灶区域确定装置还包括:
划分区域生成模块700,用于对医学图像进行区域划分操作,以生成多个划分区域;
位置关系确定模块800,用于基于病灶区域坐标信息确定病灶区域与多个划分区域的位置关系。
图26所示为本公开一示例性实施例提供的划分区域生成模块的结构示意图。在本公开图25所示实施例的基础上延伸出本公开图26所示实施例,下面着重叙述图26所示实施例与图25所示实施例的不同之处,相同之处不再赘述。
如图26所示,在本公开实施例提供的病灶区域确定装置中,划分区域生成模块700包括:
坐标信息集合确定单元710,用于将医学图像输入至关键点网络模型,以确定医学图像对应的多个关键点的坐标信息集合,其中,坐标信息集合用于对医学图像进行区域划分操作;
区域划分单元720,用于基于坐标信息集合对医学图像进行区域划分操作,以生成多个划分区域。
图27所示为本公开一示例性实施例提供的位置关系确定模块的结构示意图。在本公开图25所示实施例的基础上延伸出本公开图27所示实施例,下面着重叙述图27所示实施例与图25所示实施例的不同之处,相同之处不再赘述。
如图27所示,在本公开实施例提供的病灶区域确定装置中,位置关系确定模块800包括:
重心确定单元810,用于基于病灶区域坐标信息确定病灶区域的重心的位置信息;
位置关系确定单元820,用于基于重心和多个划分区域的位置关系确定病灶区域与多个划分区域的位置关系。
应当理解,图17至图23提供的网络模型训练装置中的第一训练数据确定模块100和训练模块200,以及第一训练数据确定模块100中包括的确定单元110和第一训练数据生成单元120,以及训练模块200中包括的第二训练数据确定单元210和训练单元220,以及训练单元220中包括的第一训练子单元2210、第一次参数调整子单元2220、第三训练数据确定子单元2230和第二次参数调整子单元2240,以及第一训练子单元2210中包括的第一网络参数调整子单元22110、第二网络参数调整子单元22120、第三网络参数调整子单元22130和第二训练子单元22140的操作和功能可以参考上述图3至图10提供的网络模型训练方法,为了避免重复,在此不再赘述。
此外,应当理解,图24至图27提供的病灶区域确定装置中的图像确定模块500、病灶区域确定模块600、划分区域生成模块700和位置关系确定模块800,以及划分区域生成模块700中包括的坐标信息集合确定单元710和区域划分单元720,以及位置关系确定模块800中包括的重心确定单元810和位置关系确定单元820的操作和功能可以参考上述图11至图15提供的病灶区域确定方法,为了避免重复,在此不再赘述。
下面,参考图28来描述根据本申请实施例的电子设备。图28所示为本公开一示例性实施例提供的电子设备的结构示意图。
如图28所示,电子设备90包括一个或多个处理器901和存储器902。
处理器901可以是中央处理单元(CPU)或者具有数据处理能力和/或指令执行能力的其他形式的处理单元,并且可以控制电子设备90中的其他组件以执行期望的功能。
存储器902可以包括一个或多个计算机程序产品,所述计算机程序产品可以包括各种形式的计算机可读存储介质,例如易失性存储器和/或非易失性存储器。所述易失性存储器例如可以包括随机存取存储器(RAM)和/或高速缓冲存储器(cache)等。所述非易失性存储器例如可以包括只读存储器(ROM)、硬盘、闪存等。在所述计算机可读存储介质上可以存储一个或多个计算机程序指令,处理器11可以运行所述程序指令,以实现上文所述的本申请的各个实施例的网络模型训练方法、病灶区域确定方法以及/或者其他期望的功能。在所述计算机可读存储介质中还可以存储诸如医学图像等各种内容。
在一个示例中,电子设备90还可以包括:输入装置903和输出装置904,这些组件通过总线系统和/或其他形式的连接机构(未示出)互连。
该输入装置903可以包括例如键盘、鼠标等等。
该输出装置904可以向外部输出各种信息,包括确定出的病灶区域信息等。该输出装置904可以包括例如显示器、扬声器、打印机、以及通信网络及其所连接的远程输出设备等等。
当然,为了简化,图28中仅示出了该电子设备90中与本申请有关的组件中的一些,省略了诸如总线、输入/输出接口等等的组件。除此之外,根据具体应用情况,电子设备90还可以包括任何其他适当的组件。
除了上述方法和设备以外,本申请的实施例还可以是计算机程序产品,其包括计算机程序指令,所述计算机程序指令在被处理器运行时使得所述处理器执行本说明书上述描述的根据本申请各种实施例的网络模型训练方法、病灶区域确定方法中的步骤。
所述计算机程序产品可以以一种或多种程序设计语言的任意组合来编写用于执行本申请实施例操作的程序代码,所述程序设计语言包括面向对象的程序设计语言,诸如Java、C++等,还包括常规的过程式程序设计语言,诸如“C”语言或类似的程序设计语言。程序代码可以完全地在用户计算设备上执行、部分地在用户设备上执行、作为一个独立的软件包执行、部分在用户计算设备上部分在远程计算设备上执行、或者完全在远程计算设备或服务器上执行。
此外,本申请的实施例还可以是计算机可读存储介质,其上存储有计算机程序指令,所述计算机程序指令在被处理器运行时使得所述处理器执行本说明书上述描述的根据本申请各种实施例的网络模型训练方法、病灶区域确定方法中的步骤。
所述计算机可读存储介质可以采用一个或多个可读介质的任意组合。可读介质可以是可读信号介质或者 可读存储介质。可读存储介质例如可以包括但不限于电、磁、光、电磁、红外线、或半导体的系统、装置或器件,或者任意以上的组合。可读存储介质的更具体的例子(非穷举的列表)包括:具有一个或多个导线的电连接、便携式盘、硬盘、随机存取存储器(RAM)、只读存储器(ROM)、可擦式可编程只读存储器(EPROM或闪存)、光纤、便携式紧凑盘只读存储器(CD-ROM)、光存储器件、磁存储器件、或者上述的任意合适的组合。
以上结合具体实施例描述了本申请的基本原理,但是,需要指出的是,在本申请中提及的优点、优势、效果等仅是示例而非限制,不能认为这些优点、优势、效果等是本申请的各个实施例必须具备的。另外,上述公开的具体细节仅是为了示例的作用和便于理解的作用,而非限制,上述细节并不限制本申请为必须采用上述具体的细节来实现。
本申请中涉及的器件、装置、设备、系统的方框图仅作为例示性的例子并且不意图要求或暗示必须按照方框图示出的方式进行连接、布置、配置。如本领域技术人员将认识到的,可以按任意方式连接、布置、配置这些器件、装置、设备、系统。诸如“包括”、“包含”、“具有”等等的词语是开放性词汇,指“包括但不限于”,且可与其互换使用。这里所使用的词汇“或”和“和”指词汇“和/或”,且可与其互换使用,除非上下文明确指示不是如此。这里所使用的词汇“诸如”指词组“诸如但不限于”,且可与其互换使用。
还需要指出的是,在本申请的装置、设备和方法中,各部件或各步骤是可以分解和/或重新组合的。这些分解和/或重新组合应视为本申请的等效方案。
提供所公开的方面的以上描述以使本领域的任何技术人员能够做出或者使用本申请。对这些方面的各种修改对于本领域技术人员而言是非常显而易见的,并且在此定义的一般原理可以应用于其他方面而不脱离本申请的范围。因此,本申请不意图被限制到在此示出的方面,而是按照与在此公开的原理和新颖的特征一致的最宽范围。
为了例示和描述的目的已经给出了以上描述。此外,此描述不意图将本申请的实施例限制到在此公开的形式。尽管以上已经讨论了多个示例方面和实施例,但是本领域技术人员将认识到其某些变型、修改、改变、添加和子组合。

Claims (19)

  1. A network model training method, comprising:
    determining first training data based on a sample image, wherein the sample image includes a lesion area, and the first training data includes labeled first lesion area coordinate information and first lesion type information;
    determining an initial network model, and training the initial network model based on the sample image to generate a network model for determining a lesion area in a medical image.
  2. The method according to claim 1, wherein the training the initial network model based on the sample image to generate a network model for determining a lesion area in a medical image comprises:
    inputting the sample image into the initial network model to determine second training data corresponding to the first training data, wherein the second training data includes second lesion area coordinate information and second lesion type information;
    performing a first parameter adjustment operation on the initial network model based on the first training data and the second training data to generate the network model for determining a lesion area in a medical image.
  3. The method according to claim 2, wherein the initial network model includes a signal-connected image feature extraction model and prediction model, and the performing a first parameter adjustment operation on the initial network model based on the first training data and the second training data comprises:
    adjusting network parameters of the prediction model and network parameters of the image feature extraction model based on the second lesion type information in the second training data and the first lesion type information in the first training data.
  4. The method according to claim 3, wherein the prediction model includes a coordinate information prediction sub-model and a type information prediction sub-model, and the adjusting network parameters of the prediction model and network parameters of the image feature extraction model based on the second lesion type information in the second training data and the first lesion type information in the first training data comprises:
    adjusting network parameters of the type information prediction sub-model in the prediction model based on the second lesion type information and the first lesion type information;
    adjusting network parameters of the image feature extraction model based on the adjusted type information prediction sub-model;
    adjusting network parameters of the coordinate information prediction sub-model in the prediction model based on the adjusted image feature extraction model.
  5. The method according to claim 4, wherein a loss function in the type information prediction sub-model includes a logarithmic function generated based on a predicted probability.
  6. The method according to claim 4, wherein the loss function in the type information prediction sub-model and/or the loss function in the coordinate information prediction sub-model is a cross-entropy loss function.
  7. The method according to claim 3, wherein the performing a first parameter adjustment operation on the initial network model based on the first training data and the second training data further comprises:
    adjusting the network parameters of the image feature extraction model based on the second lesion area coordinate information in the second training data and the first lesion area coordinate information in the first training data.
  8. The method according to any one of claims 3 to 7, wherein the image feature extraction model includes a ResNext-50 network model and a panoptic feature pyramid network model.
  9. The method according to claim 2, wherein the performing a first parameter adjustment operation on the initial network model based on the first training data and the second training data to generate the network model for determining a lesion area in a medical image comprises:
    performing the first parameter adjustment operation on the initial network model based on the first training data and the second training data;
    determining third training data corresponding to the first training data based on the sample image and the initial network model after the first parameter adjustment operation, wherein the third training data includes third lesion area coordinate information and third lesion type information;
    performing a second parameter adjustment operation, based on the first training data and the third training data, on the initial network model after the first parameter adjustment operation to generate the network model for determining a lesion area in a medical image.
  10. The method according to any one of claims 1 to 7, wherein the determining first training data based on a sample image comprises:
    determining a sample image including a lesion area and a labeling rule;
    performing a labeling operation on the sample image based on the labeling rule to generate the first training data.
  11. The method according to any one of claims 1 to 7, wherein the sample image is a lung image including a pulmonary tuberculosis lesion area, and the first lesion type information includes at least one of primary pulmonary tuberculosis, hematogenous disseminated pulmonary tuberculosis, secondary pulmonary tuberculosis, tracheobronchial tuberculosis, tuberculous pleurisy, and old (inactive) pulmonary tuberculosis.
  12. A method for determining a lesion area, comprising:
    determining a medical image for which a lesion area needs to be determined;
    inputting the medical image into a network model for determining a lesion area in a medical image to determine lesion area coordinate information of the medical image, wherein the network model is obtained based on the network model training method according to any one of claims 1 to 11.
  13. The method according to claim 12, further comprising, after the inputting the medical image into a network model for determining a lesion area in a medical image to determine lesion area coordinate information of the medical image:
    performing a region division operation on the medical image to generate multiple divided regions;
    determining a positional relationship between the lesion area and the multiple divided regions based on the lesion area coordinate information.
  14. The method according to claim 13, wherein the performing a region division operation on the medical image to generate multiple divided regions comprises:
    inputting the medical image into a keypoint network model to determine a coordinate information set of multiple keypoints corresponding to the medical image, wherein the coordinate information set is used to perform the region division operation on the medical image;
    performing the region division operation on the medical image based on the coordinate information set to generate the multiple divided regions.
  15. The method according to claim 13 or 14, wherein the determining a positional relationship between the lesion area and the multiple divided regions based on the lesion area coordinate information comprises:
    determining position information of a center of gravity of the lesion area based on the lesion area coordinate information;
    determining the positional relationship between the lesion area and the multiple divided regions based on a positional relationship between the center of gravity and the multiple divided regions.
  16. A network model training apparatus, comprising:
    a first training data determining module, configured to determine first training data based on a sample image, wherein the sample image includes a lesion area, and the first training data includes labeled first lesion area coordinate information and first lesion type information;
    a training module, configured to determine an initial network model and train the initial network model based on the sample image to generate a network model for determining a lesion area in a medical image.
  17. An apparatus for determining a lesion area, comprising:
    an image determining module, configured to determine a medical image for which a lesion area needs to be determined;
    a lesion area determining module, configured to input the medical image into a network model for determining a lesion area in a medical image to determine lesion area coordinate information of the medical image, wherein the network model is obtained based on the network model training method according to any one of claims 1 to 11.
  18. A computer-readable storage medium storing a computer program, the computer program being used to execute the network model training method according to any one of claims 1 to 11, or to execute the method for determining a lesion area according to any one of claims 12 to 15.
  19. An electronic device, comprising:
    a processor;
    a memory for storing instructions executable by the processor;
    the processor being configured to execute the network model training method according to any one of claims 1 to 11, or to execute the method for determining a lesion area according to any one of claims 12 to 15.
PCT/CN2020/092570 2019-10-31 2020-05-27 网络模型训练方法及装置、病灶区域确定方法及装置 WO2021082416A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911049680.5A CN110827294A (zh) 2019-10-31 2019-10-31 网络模型训练方法及装置、病灶区域确定方法及装置
CN201911049680.5 2019-10-31

Publications (1)

Publication Number Publication Date
WO2021082416A1 true WO2021082416A1 (zh) 2021-05-06

Family

ID=69551516

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/092570 WO2021082416A1 (zh) 2019-10-31 2020-05-27 网络模型训练方法及装置、病灶区域确定方法及装置

Country Status (2)

Country Link
CN (1) CN110827294A (zh)
WO (1) WO2021082416A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113782221A (zh) * 2021-09-16 2021-12-10 平安科技(深圳)有限公司 基于自训练学习的疾病预测装置、设备及存储介质

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110827294A (zh) * Network model training method and apparatus, and lesion area determination method and apparatus
CN111353975A (zh) * Network model training method and apparatus, and lesion localization method and apparatus
CN111383328B (zh) * 3D visualization method and system for breast cancer lesions
CN111325739B (zh) * Method and apparatus for lung lesion detection, and training method for an image detection model
CN111445456B (zh) * Training method and apparatus for classification models and network models, and recognition method and apparatus
CN111899848B (zh) * Image recognition method and device
TWI777319B (zh) * Stem cell density determination method, apparatus, computer device, and storage medium
CN112489794A (zh) * Model training method and apparatus, electronic terminal, and storage medium
CN116310627B (zh) * Model training method, contour prediction method, apparatus, electronic device, and medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107563123A (zh) * Method and apparatus for annotating medical images
CN108615237A (zh) * Lung image processing method and image processing device
CN110276411A (zh) * Image classification method, apparatus, device, storage medium, and medical electronic device
CN110827294A (zh) * Network model training method and apparatus, and lesion area determination method and apparatus

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9218524B2 (en) * 2012-12-06 2015-12-22 Siemens Product Lifecycle Management Software Inc. Automatic spatial context based multi-object segmentation in 3D images
CN110363768B (zh) * Deep learning-based auxiliary system for predicting the extent of early cancer lesions

Also Published As

Publication number Publication date
CN110827294A (zh) 2020-02-21

Similar Documents

Publication Publication Date Title
WO2021082416A1 (zh) Network model training method and apparatus, and lesion area determination method and apparatus
Zhang et al. Joint craniomaxillofacial bone segmentation and landmark digitization by context-guided fully convolutional networks
CN110766701B (zh) Network model training method and apparatus, and region division method and apparatus
US10949970B2 Methods and apparatus for the application of machine learning to radiographic images of animals
CN110992376A (zh) CT image-based rib segmentation method, apparatus, medium, and electronic device
US20220301154A1 Medical image analysis using navigation processing
WO2021151302A1 (zh) Machine learning-based drug quality-control analysis method, apparatus, device, and medium
US11475568B2 Method for controlling display of abnormality in chest x-ray image, storage medium, abnormality display control apparatus, and server apparatus
CN111524109A (zh) Scoring method and apparatus for head medical images, electronic device, and storage medium
CN111476772B (zh) Medical image-based lesion analysis method and apparatus
CN111340209A (zh) Network model training method, image segmentation method, and lesion localization method
Monsi et al. XRAY AI: Lung Disease Prediction using machine learning
JP2023175011A (ja) Document creation support apparatus, method, and program
Kara et al. Identification and localization of endotracheal tube on chest radiographs using a cascaded convolutional neural network approach
US20230377149A1 Learning apparatus, learning method, trained model, and program
CN115053296A (zh) Improved surgical report generation method using machine learning, and device therefor
JP7007469B2 (ja) Medical document creation support apparatus, method, and program; trained model; and learning apparatus, method, and program
Ibrahim et al. Lung Segmentation Using ResUnet++ Powered by Variational Auto Encoder-Based Enhancement in Chest X-ray Images
Hsu et al. Development of a deep learning model for chest X-ray screening
KR20240048294A (ko) Method, apparatus, and system for providing medical augmented-reality images using artificial intelligence
JP2024054748A (ja) Method for generating a language feature extraction model, information processing apparatus, information processing method, and program
EP4309121A1 Detecting abnormalities in an x-ray image
JP2023020145A (ja) Analysis apparatus, analysis method, and program
DE102021201912A1 (de) Method for providing a metadata attribute associated with a medical image
KR102136107B1 (ko) Apparatus and method for registration of bone-suppressed X-ray images

Legal Events

Code Description

121 Ep: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 20881390; Country of ref document: EP; Kind code of ref document: A1)

NENP Non-entry into the national phase (Ref country code: DE)

122 Ep: PCT application non-entry in the European phase (Ref document number: 20881390; Country of ref document: EP; Kind code of ref document: A1)

32PN Ep: public notification in the EP bulletin, as the address of the addressee cannot be established (Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 05/09/2022))
