CN111583354B - Training method of medical image processing unit and medical image motion estimation method - Google Patents

Training method of medical image processing unit and medical image motion estimation method

Info

Publication number
CN111583354B
CN111583354B (application CN202010382351.9A)
Authority
CN
China
Prior art keywords
image
medical image
scan
flat
training
Prior art date
Legal status
Active
Application number
CN202010382351.9A
Other languages
Chinese (zh)
Other versions
CN111583354A (en)
Inventor
张正强
Current Assignee
Shanghai United Imaging Healthcare Co Ltd
Original Assignee
Shanghai United Imaging Healthcare Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai United Imaging Healthcare Co Ltd filed Critical Shanghai United Imaging Healthcare Co Ltd
Priority to CN202010382351.9A
Publication of CN111583354A
Application granted
Publication of CN111583354B


Classifications

    • G: PHYSICS
        • G06: COMPUTING; CALCULATING OR COUNTING
            • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T 11/00: 2D [Two Dimensional] image generation
                    • G06T 11/003: Reconstruction from projections, e.g. tomography
                        • G06T 11/008: Specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction
                • G06T 7/00: Image analysis
                    • G06T 7/20: Analysis of motion
                        • G06T 7/207: Analysis of motion for motion estimation over a hierarchy of resolutions
                • G06T 2207/00: Indexing scheme for image analysis or image enhancement
                    • G06T 2207/10: Image acquisition modality
                        • G06T 2207/10072: Tomographic images
                            • G06T 2207/10081: Computed x-ray tomography [CT]
                            • G06T 2207/10104: Positron emission tomography [PET]
                    • G06T 2207/20: Special algorithmic details
                        • G06T 2207/20081: Training; Learning
                        • G06T 2207/20084: Artificial neural networks [ANN]
    • A: HUMAN NECESSITIES
        • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
            • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
                • A61B 6/00: Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment
                    • A61B 6/52: Devices using data or image processing specially adapted for radiation diagnosis
                        • A61B 6/5211: involving processing of medical diagnostic data
                        • A61B 6/5258: involving detection or reduction of artifacts or noise
                            • A61B 6/5264: involving detection or reduction of artifacts or noise due to motion

Abstract

The present application relates to a training method of a medical image processing unit, a medical image motion estimation method, a motion artifact correction method, a medical image segmentation method, and a corresponding computer device and computer readable storage medium. The training method of the medical image processing unit comprises the following steps: acquiring a first medical image of a scanned object; generating first flat scan images of the scanned object at different scan angles according to the first medical image; generating an image processing result corresponding to the first flat scan image at each scan angle; and training the artificial neural network in the medical image processing unit using the first flat scan image and the corresponding image processing result as training samples. This solves the problem in the related art that sufficient training samples are difficult to obtain, and achieves the beneficial effect of quickly obtaining a large number of training samples.

Description

Training method of medical image processing unit and medical image motion estimation method
Technical Field
The present application relates to the field of computer imaging, and in particular to a training method for a medical image processing unit, a medical image motion estimation method, a motion artifact correction method, a medical image segmentation method, and a computer device and a computer readable storage medium.
Background
Deep learning based on artificial neural networks has become popular in recent years and is widely used in the medical field. For example, in the related art, artifact correction units, image super-resolution units, and motion parameter estimation units based on artificial neural networks are used to process various medical images.
Training an artificial neural network requires a large number of training samples. In the related art, reconstructed images are generally used as training samples. However, the inventors found during their research that using reconstructed images as training samples has the following drawbacks:
on the one hand, because clinical medical data is confidential, it is difficult for researchers to obtain enough medical data as training samples;
on the other hand, because the raw data of a medical image is typically very large, image reconstruction is inefficient; moreover, some raw data may be missing, making it difficult to reconstruct the desired image for use as a training sample.
Disclosure of Invention
The embodiments of the present application provide a training method of a medical image processing unit, a medical image motion estimation method, a motion artifact correction method, a medical image segmentation method, a computer device, and a computer readable storage medium, so as to at least solve the problem in the related art that sufficient training samples are difficult to obtain.
In a first aspect, an embodiment of the present application provides a training method of a medical image processing unit, applied to a medical image processing unit including an artificial neural network, comprising: acquiring a first medical image of a scanned object; generating first flat scan images of the scanned object at different scan angles according to the first medical image; generating an image processing result corresponding to the first flat scan image at each scan angle; and training the artificial neural network in the medical image processing unit using the first flat scan image and the corresponding image processing result as a training sample.
In some of these embodiments, the medical image processing unit is a medical image motion estimation unit, and generating the image processing result corresponding to the first flat scan image at each scan angle comprises: simulating motion at each scan angle of the first medical image to obtain a second medical image; and generating a second flat scan image of the scanned object at each scan angle according to the second medical image, the second flat scan image serving as the image processing result corresponding to the first flat scan image.
In some of these embodiments, simulating motion at each scan angle of the first medical image to obtain the second medical image comprises: re-projecting the first medical image to obtain projection data of the first medical image; applying a motion effect to the projection data corresponding to each scan angle; and reconstructing the second medical image from the projection data subjected to the motion effect.
In some of these embodiments, training the artificial neural network in the medical image processing unit using the first flat scan image and the corresponding image processing result as training samples comprises: training the artificial neural network using the second flat scan image as training data and the first flat scan image as the gold standard.
In some of these embodiments, the medical image processing unit is a medical image segmentation unit, and generating the image processing result corresponding to the first flat scan image at each scan angle comprises: labeling an image segmentation result in the first flat scan image at each scan angle to obtain a second flat scan image, the second flat scan image serving as the image processing result corresponding to the first flat scan image.
In some of these embodiments, training the artificial neural network in the medical image processing unit using the first flat scan image and the corresponding image processing result as training samples comprises: training the artificial neural network using the first flat scan image as training data and the second flat scan image as the gold standard.
In a second aspect, an embodiment of the present application provides a medical image motion estimation method, comprising: acquiring a third medical image of the scanned object, and generating third flat scan images at different scan angles according to the third medical image; processing the third flat scan image with a medical image motion estimation unit trained by the training method of the first aspect, to obtain a fourth flat scan image from which the influence of motion of the scanned object at the different scan angles has been eliminated; comparing the third flat scan image and the fourth flat scan image; and determining that the scanned object moved at the corresponding scan angle when the difference between the third flat scan image and the fourth flat scan image is greater than a preset threshold.
In some of these embodiments, comparing the third flat scan image and the fourth flat scan image comprises: subtracting one from the other to obtain a residual image; and judging, from the average pixel value of the residual image, whether the difference between the third flat scan image and the fourth flat scan image is greater than the preset threshold.
In a third aspect, an embodiment of the present application provides a motion artifact correction method, comprising: determining, according to the medical image motion estimation method of the second aspect, the scan angles at which the scanned object moved; and, when reconstructing an image from the projection data of the third medical image, reducing the weight of the projection data corresponding to those scan angles so as to correct the motion artifacts of the third medical image.
In a fourth aspect, an embodiment of the present application provides a medical image segmentation method, comprising: acquiring a fourth flat scan image; and processing the fourth flat scan image with a medical image segmentation unit trained by the training method of the first aspect, to obtain a fifth flat scan image labeled with an image segmentation result.
In a fifth aspect, embodiments of the present application provide a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method as described in the first aspect, and/or the second aspect, and/or the third aspect, and/or the fourth aspect when executing the computer program.
In a sixth aspect, embodiments of the present application provide a computer readable storage medium having stored thereon a computer program which when executed by a processor implements the method according to the first aspect, and/or the second aspect, and/or the third aspect, and/or the fourth aspect.
Compared with the related art, the training method of the medical image processing unit, the medical image motion estimation method, the motion artifact correction method, the medical image segmentation method, the computer device, and the computer readable storage medium provided by the embodiments of the present application acquire a first medical image of a scanned object; generate first flat scan images of the scanned object at different scan angles according to the first medical image; generate an image processing result corresponding to the first flat scan image at each scan angle; and train the artificial neural network in the medical image processing unit using the first flat scan image and the corresponding image processing result as training samples. This solves the problem in the related art that sufficient training samples are difficult to obtain and achieves the beneficial effect of quickly obtaining a large number of training samples.
The details of one or more embodiments of the application are set forth in the accompanying drawings and the description below to provide a more thorough understanding of the features, objects, and advantages of the application.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute an undue limitation to the application. In the drawings:
FIG. 1 is a schematic structural view of a CT system according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a hardware architecture of a computer device according to an embodiment of the present application;
FIG. 3 is a flow chart of a training method of a medical image processing unit according to an embodiment of the present application;
FIG. 4 is a flow chart of a training method of a medical image motion estimation unit according to an embodiment of the present application;
FIG. 5 is a flow chart of a training method of a medical image segmentation unit according to an embodiment of the present application;
FIG. 6 is a flow chart of a medical image motion estimation method according to an embodiment of the present application;
FIG. 7 is a flow chart of a motion artifact correction method according to an embodiment of the present application;
FIG. 8 is a flow chart of a motion artifact correction method according to a preferred embodiment of the present application;
FIG. 9 is a schematic illustration of the influence of scanned-object motion on a flat scan image according to a preferred embodiment of the present application;
FIG. 10 is a flow chart of a medical image segmentation method according to an embodiment of the present application;
FIG. 11 is a flow chart of a dose modulation method according to a preferred embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present application more apparent, the present application is described and illustrated below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application. All other embodiments obtained by one of ordinary skill in the art, without creative effort, based on the embodiments provided herein are intended to be within the scope of the present application.
It is apparent that the drawings in the following description are only some examples or embodiments of the present application, and those of ordinary skill in the art may apply the present application to other similar situations according to these drawings without inventive effort. Moreover, it should be appreciated that while such a development effort might be complex and lengthy, it would nevertheless be a routine undertaking of design, fabrication, or manufacture for those of ordinary skill having the benefit of this disclosure.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is to be expressly and implicitly understood by those of ordinary skill in the art that the embodiments described herein can be combined with other embodiments without conflict.
Unless defined otherwise, technical or scientific terms used herein have the ordinary meaning understood by one of ordinary skill in the art to which this application belongs. References to "a," "an," "the," and similar terms herein do not denote a limitation of quantity and may be singular or plural. The terms "comprising," "including," "having," and any variations thereof are intended to cover a non-exclusive inclusion; for example, a process, method, system, article, or apparatus that comprises a list of steps or modules (elements) is not limited to those steps or elements but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. The terms "connected," "coupled," and the like in this application are not limited to physical or mechanical connections but may include electrical connections, whether direct or indirect. The term "plurality" as used herein means two or more. "And/or" describes an association relationship between associated objects, covering three cases; for example, "A and/or B" may mean: A exists alone, A and B exist together, or B exists alone. The character "/" generally indicates an "or" relationship between the associated objects. The terms "first," "second," "third," and the like merely distinguish similar objects and do not represent a particular ordering of objects.
The methods, units, computer devices, or computer readable storage media referred to herein may be used for non-invasive imaging, such as diagnosis and study of disease, inspection of buildings in the industrial field, etc.; the related system can comprise a CT system and a PET system, and also can comprise a multi-mode hybrid system such as PET-CT and the like. The methods, units, computer devices or computer readable storage media referred to in this application may be integrated with the systems described above or may be relatively independent.
The following describes and illustrates embodiments of the present application using a CT system as an example.
Fig. 1 is a schematic structural diagram of a CT system according to an embodiment of the present application. As shown in fig. 1, the CT system includes a CT scanning system 100 and a computer device 200. The CT scanning system 100 includes an examination couch 110 and a scanning component 120. The couch 110 carries the person to be examined and is movable so that the scanned object of the person to be examined can be moved to a position suitable for examination, such as the position labeled 130 in fig. 1. The scanning component 120 has a radiation source 121 and a detector 122.
The radiation source 121 may be configured to emit radiation toward the scanned object of the person to be examined in order to generate scan data for a medical image. The scanned object of the person to be examined may comprise a substance, tissue, organ, sample, body, or the like, or any combination thereof. In certain embodiments, the scanned object may comprise a patient or a portion thereof, i.e., the head, chest, lungs, pleura, mediastinum, abdomen, large intestine, small intestine, bladder, gall bladder, triple warmer, pelvis, diaphysis, extremities, skeleton, blood vessels, or the like, or any combination thereof. The radiation emitted by the radiation source 121 passes through the scanned object of the person to be examined and is received by the detector 122.
The radiation source 121 may include a radiation generator, which may comprise one or more radiation tubes emitting radiation or radiation beams. The source 121 may be an X-ray tube, a cold cathode ion tube, a high vacuum hot cathode tube, a rotating anode tube, or the like. The shape of the emitted radiation beam may be linear, narrow pen-shaped, narrow fan-shaped, cone-shaped, wedge-shaped, irregular, or the like, or any combination thereof. The fan angle of the beam may be a value in the range of 20° to 90°. The tube in the source 121 may be fixed in one position; in some cases, it may be translated or rotated.
The detector 122 may be configured to receive radiation from the radiation source 121 or another radiation source. Radiation from the radiation source 121 may pass through the person to be examined and then reach the detector 122. After receiving the radiation, the detector 122 generates a detection result containing a radiation image of the person to be examined. The detector 122 includes a radiation detector or other components. The shape of the radiation detector may be flat, arcuate, circular, or the like, or any combination thereof. The fan angle of an arcuate detector may range from 20° to 90°, and may be fixed or adjustable according to circumstances, including the desired image resolution, image size, sensitivity of the detector, stability of the detector, or the like, or any combination thereof. In some embodiments, a pixel of the detector may be the smallest detection unit, e.g., a detector cell (such as a scintillator or photosensor). The pixels of the detector may be arranged in a single row, a double row, or another number of rows.
The computer device 200 includes a scan control means and an image generation means. Wherein the scanning control device is configured to control the couch 110 and the scanning unit 120 to perform scanning. The image generation means is for generating a medical image from the detection result of the detector 122.
Since the scanning component 120 tends to emit radiation as the scan is performed, in some embodiments, to avoid exposure of an operator of the CT system to such radiation, the computer device 200 may be disposed in a different room than the scanning component 120 so that the operator of the CT system may be in another room, protected from the radiation, and capable of generating and viewing the scan results via the computer device 200.
Fig. 2 is a schematic diagram of a hardware structure of a computer device according to an embodiment of the present application, and as shown in fig. 2, the computer device of the present embodiment includes a processor 211 and a memory 212 storing computer program instructions.
The processor 211 may include a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present application.
In one aspect, in some of these embodiments, the processor 211 may be configured to perform a training method of the medical image processing unit.
In some of these embodiments, the processor 211 is configured to: acquiring a first medical image of a scanned object; generating a first flat scan image of the scanned object at different scan angles according to the first medical image; generating an image processing result corresponding to the first flat scan image at each scan angle; and taking the first flat scan image and the corresponding image processing result as training samples to train the artificial neural network in the medical image processing unit.
In some of these embodiments, the medical image processing unit is a medical image motion estimation unit; the processor 211 is configured to: simulating motion under each scanning angle of the first medical image to obtain a second medical image; a second flat scan image of the scanned object at each scan angle is generated from the second medical image, and the second flat scan image is used as an image processing result corresponding to the first flat scan image.
In some of these embodiments, the processor 211 is configured to: re-projecting the first medical image to obtain projection data of the first medical image; applying a motion effect in projection data corresponding to each scan angle in the projection data of the first medical image; and reconstructing a second medical image according to the projection data subjected to the motion influence.
In some of these embodiments, the processor 211 is configured to: and taking the second flat scan image as training data, taking the first flat scan image as a gold standard, and training an artificial neural network in the medical image processing unit.
In some of these embodiments, the medical image processing unit is a medical image segmentation unit; the processor 211 is configured to: and labeling an image segmentation result in the first flat-scan image at each scanning angle to obtain a second flat-scan image, and taking the second flat-scan image as an image processing result corresponding to the first flat-scan image.
In some of these embodiments, the processor 211 is configured to: and training an artificial neural network in the medical image processing unit by taking the first flat scan image as training data and the second flat scan image as a gold standard.
On the other hand, in some of the embodiments, the processor 211 may be configured to perform a medical image motion estimation method.
In some of these embodiments, the processor 211 is configured to: acquiring a third medical image of the scanned object, and generating third flat scan images at different scan angles according to the third medical image; processing the third flat scan image by using the medical image motion estimation unit to obtain a fourth flat scan image from which the influence of motion of the scanned object at different scan angles is eliminated; comparing the third flat scan image with the fourth flat scan image; and determining that the scanned object moved at the corresponding scan angle in the case that the difference between the third and fourth flat scan images is greater than a preset threshold.
In some of these embodiments, the processor 211 is configured to: taking the difference between the third flat scanning image and the fourth flat scanning image to obtain a residual image; and judging whether the difference between the third flat scanning image and the fourth flat scanning image is larger than a preset threshold value or not according to the average pixel value of the residual image.
In yet another aspect, in some of these embodiments, the processor 211 may be configured to perform a motion artifact correction method.
In some of these embodiments, the processor 211 is configured to: determining a scanning angle of motion of a scanned object according to a medical image motion estimation method;
in reconstructing an image from projection data of the third medical image, the weight of projection data corresponding to a scan angle at which the scanned object is moving is reduced to correct motion artifacts of the third medical image.
In yet another aspect, in some of these embodiments, the processor 211 may be configured to perform a medical image segmentation method.
In some of these embodiments, the processor 211 is configured to: acquiring a fourth flat scan image; and processing the fourth flat scan image by using a medical image segmentation unit to obtain a fifth flat scan image, wherein the fifth flat scan image is labeled with an image segmentation result.
Memory 212 may include mass storage for data or instructions. By way of example and not limitation, memory 212 may comprise a Hard Disk Drive (HDD), a floppy disk drive, a Solid State Drive (SSD), flash memory, an optical disk, a magneto-optical disk, magnetic tape, a Universal Serial Bus (USB) drive, or a combination of two or more of these. Memory 212 may include removable or non-removable (or fixed) media, where appropriate, and may be internal or external to the data processing apparatus. In a particular embodiment, the memory 212 is non-volatile memory comprising Read-Only Memory (ROM) and Random Access Memory (RAM). Where appropriate, the ROM may be a mask-programmed ROM, a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), an Electrically Alterable ROM (EAROM), or FLASH memory, or a combination of two or more of these. Where appropriate, the RAM may be Static Random-Access Memory (SRAM) or Dynamic Random-Access Memory (DRAM), where the DRAM may be Fast Page Mode DRAM (FPM DRAM), Extended Data Out DRAM (EDO DRAM), Synchronous DRAM (SDRAM), or the like.
Memory 212 may be used to store or cache various data files (e.g., medical images, projection data, operating systems, motion estimation units, artificial neural networks, etc.) that need to be processed and/or used for communication, as well as possible computer program instructions executed by processor 211.
The processor 211 reads and executes the computer program instructions stored in the memory 212 to implement one or more of the training method, the medical image motion estimation method, the motion artifact correction method, and the medical image segmentation method of the medical image processing unit according to the embodiments of the present application.
In some of these embodiments, the computer device may also include a communication interface 213, a display device 214, and a bus 210. As shown in fig. 2, the processor 211, the memory 212, the communication interface 213, and the display device 214 are connected and communicate with each other through the bus 210.
The communication interface 213 is used to implement communication between the modules, devices, and/or units in the present embodiment. The communication interface 213 may also enable data communication with other components, such as external devices, medical image scanning devices, databases, external storage, and image/data processing workstations.
Bus 210 includes hardware, software, or both, coupling components of the computer device to each other. Bus 210 includes, but is not limited to, at least one of: a data bus, an address bus, a control bus, an expansion bus, or a local bus. By way of example and not limitation, bus 210 may include an Accelerated Graphics Port (AGP) or other graphics bus, an Extended Industry Standard Architecture (EISA) bus, a Front Side Bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a Low Pin Count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCI-X) bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association Local Bus (VLB), another suitable bus, or a combination of two or more of these. Bus 210 may include one or more buses, where appropriate. Although embodiments of the present application describe and illustrate a particular bus, the present application contemplates any suitable bus or interconnect.
Additionally, embodiments of the present application may be implemented by providing a computer-readable storage medium. The computer readable storage medium has stored thereon computer program instructions; the computer program instructions, when executed by the processor, implement one or more of the training method, the medical image motion estimation method, the motion artifact correction method, and the medical image segmentation method of the medical image processing unit provided in this embodiment.
The embodiment provides a training method of a medical image processing unit, which is applied to the medical image processing unit comprising an artificial neural network. Fig. 3 is a flowchart of a training method of a medical image processing unit according to an embodiment of the present application, as shown in fig. 3, the flowchart comprising the steps of:
step S301, a first medical image of a scanned object is acquired.
The first medical image comprises a medical image obtained by scanning the scanned object to acquire raw data and reconstructing from the raw data. The medical image may be a CT image or a PET image. The first medical image may be acquired in real time from an imaging system, such as a CT system, a PET system, or a PET-CT system, or may be acquired from a medical image library.
Step S302, generating a first flat scan image of the scanned object under different scanning angles according to the first medical image.
The flat scan image (topogram) of the embodiments of the present application is also referred to as a localization image or scout view. A flat scan image may be acquired by scanning with the X-ray tube fixed at a certain scan angle, producing a digital radiograph (Computed Radiography, abbreviated CR) generated by a computer. A flat scan image may also be generated from a medical image using a topogram generation algorithm. Flat scan images resemble ordinary radiographs but are much sharper. In the related art, the flat scan image is mainly used for marking CT scan plan lines, such as displaying the scan position, body position, inclination angle, slice thickness, slice spacing, scan direction, and time; it can also be used as a high-definition photograph without markings and is likewise of diagnostic interest.
Since the same medical image yields a corresponding flat scan image for each scan angle, a large number of flat scan images can be obtained by varying the scan angle. Taking 2400 scan angles as an example, changing the scan angle step by step around the transverse plane of the same medical image yields 2400 flat scan images; changing the scan angle step by step around the coronal or sagittal plane yields another 4800. Thus, 7200 flat scan images can be obtained from a single medical image, so a large number of flat scan images can be obtained quickly and simply as training samples through the embodiments of the present application. If more medical images are used to generate flat scan images, the number of resulting flat scan images grows accordingly.
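The embodiments do not fix a particular topogram generation algorithm. The following is a minimal sketch, assuming a simple parallel-beam ray-sum model; the function names and array shapes are illustrative, not taken from the patent:

```python
import numpy as np
from scipy.ndimage import rotate

def synthesize_topogram(volume: np.ndarray, angle_deg: float) -> np.ndarray:
    """Synthesize a flat scan (scout) image from a reconstructed CT volume
    by summing attenuation along parallel rays at one scan angle.
    `volume` is indexed (slice, row, col); `angle_deg` is the gantry angle."""
    # Rotate each axial slice so the ray direction lines up with one array axis.
    rotated = rotate(volume, angle_deg, axes=(1, 2), reshape=False, order=1)
    # Parallel-beam line integral: sum along the ray direction.
    return rotated.sum(axis=1)  # shape: (num_slices, detector_columns)

# Sweeping the scan angle turns one volume into many training topograms.
volume = np.random.rand(64, 256, 256).astype(np.float32)  # stand-in for a CT volume
angles = np.linspace(0.0, 360.0, 2400, endpoint=False)
topograms = [synthesize_topogram(volume, a) for a in angles[:4]]  # small demo subset
```

Under this model, rotating about the coronal or sagittal plane instead amounts to permuting the volume axes before the same ray sum.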
Step S303, an image processing result corresponding to the first swipe image at each scanning angle is generated.
Step S304, taking the first flat scan image and the corresponding image processing result as a training sample, and training the artificial neural network in the medical image processing unit.
In the above-described step S303 and step S304, the image processing result corresponding to the flat scan image may be generated using different processing methods according to the task of the medical image processing unit. For example, when the medical image processing unit is trained to estimate motion of the scanned object, the image processing result is generated under simulated motion of the scanned object; it may be a flat scan image with the influence of motion superimposed, or a label marking whether the scanned object moved. For another example, when the medical image processing unit is trained to perform medical image segmentation, an image segmentation result is generated; it may be an image obtained by adding a label frame to the flat scan image, or other information representing the position of the label frame, for example, label frame coordinates.
Through the steps S301 to S304, the flat scan image is used as training data of the artificial neural network, so that the problem that enough training samples are difficult to obtain in the related technology is solved, and the beneficial effect of quickly obtaining a large number of training samples is realized.
In the embodiment of the application, another advantage of adopting the medical image processing unit based on the artificial neural network is that after the artificial neural network is obtained through training, the artificial neural network can be transplanted or copied into other systems very conveniently to perform the same task.
The medical image processing unit of this embodiment may further include a preprocessing module for preprocessing the input flat scan image. For example, the preprocessing module may segment the flat scan image, extract a region of interest, crop or scale the image, reduce its resolution, and so on. In addition, in this embodiment, the preprocessing module also converts the flat scan image into a data format that the artificial neural network can process, for example tensor data, so that the flat scan image can be processed by an artificial neural network implemented with the TensorFlow framework. The artificial neural network in the medical image processing unit then obtains its output by processing the tensor data of the flat scan image.
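A sketch of such preprocessing, assuming the TensorFlow framework mentioned above; the function name, normalization, and target size are illustrative assumptions:

```python
import numpy as np
import tensorflow as tf

def preprocess_topogram(image: np.ndarray, target_size=(256, 256)) -> tf.Tensor:
    """Illustrative preprocessing: normalize a flat scan image, convert it to
    a 4D tensor (batch, height, width, channels), and resize it."""
    img = image.astype(np.float32)
    img = (img - img.min()) / max(float(img.max() - img.min()), 1e-8)  # scale to [0, 1]
    tensor = tf.convert_to_tensor(img)[tf.newaxis, ..., tf.newaxis]
    return tf.image.resize(tensor, target_size)  # match the network's input size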
In this embodiment, the artificial neural network in the medical image processing unit may be any one of known artificial neural networks or a modification or further evolution of known artificial neural networks, for example, the artificial neural network may include, but is not limited to, at least one of: convolutional neural networks, recurrent neural networks, deep reinforcement learning neural networks, generating antagonistic neural networks, and deep belief neural networks.
The embodiments of the present application do not limit whether the neural network is trained by supervised or unsupervised learning; in this embodiment, the artificial neural network is preferably trained by supervised learning. In supervised learning, the artificial neural network is trained with training data and the label (or image processing result) corresponding to that training data as one group of data, and the parameters of the network are updated through a gradient descent algorithm and error back-propagation until the error between the label predicted by the network and the label corresponding to the training data is smaller than expected (called parameter convergence), yielding a fully trained artificial neural network. The label is also referred to as the gold standard.
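A generic supervised loop of the kind described, sketched in TensorFlow under assumed choices of optimizer (Adam), loss (mean squared error), and stopping rule; the patent does not specify these:

```python
import tensorflow as tf

def train_supervised(model, dataset, epochs=10, lr=1e-4, tol=1e-4):
    """Each element of `dataset` is a (training_image, gold_standard) pair;
    parameters are updated by gradient descent with error back-propagation."""
    optimizer = tf.keras.optimizers.Adam(learning_rate=lr)
    loss_fn = tf.keras.losses.MeanSquaredError()
    for _ in range(epochs):
        for x, y in dataset:
            with tf.GradientTape() as tape:
                loss = loss_fn(y, model(x, training=True))
            grads = tape.gradient(loss, model.trainable_variables)  # back-propagation
            optimizer.apply_gradients(zip(grads, model.trainable_variables))
        if float(loss) < tol:  # crude "parameter convergence" check on the last batch
            return model
    return model
```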
In some of these embodiments, the medical image processing unit is a medical image motion estimation unit for estimating motion parameters of the medical image. Fig. 4 is a flowchart of a training method of a medical image motion estimation unit according to an embodiment of the present application, as shown in fig. 4, the flowchart comprising the steps of:
step S401, acquiring a first medical image of a scanned object.
The first medical image comprises a medical image obtained by scanning the scanned object to acquire raw data and reconstructing from the raw data. The medical image may be a CT image or a PET image. The first medical image may be acquired in real time from an imaging system, such as a CT system, a PET system, or a PET-CT system, or may be acquired from a medical image library.
Step S402, generating a first flat scan image of the scanned object under different scanning angles according to the first medical image.
The flat scan image of the embodiments of the present application may be acquired by scanning with the X-ray tube fixed at a certain scan angle, producing a digital image generated by a computer; it may also be generated from a medical image using a topogram generation algorithm.
Step S403, simulating motion at each scanning angle of the first medical image, resulting in a second medical image.
In step S403, motion may be simulated at each scan angle of the first medical image using computer simulation. For example: re-project the first medical image to obtain its projection data; apply a motion effect to the projection data corresponding to each scan angle; and reconstruct the second medical image from the projection data subjected to the motion effect.
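A 2D sketch of this simulate-motion step, using scikit-image's radon/iradon as stand-ins for the patent's re-projection and reconstruction operators; the rigid detector-direction shift is an assumed, simplistic motion model:

```python
import numpy as np
from skimage.transform import radon, iradon

def simulate_motion(image: np.ndarray, motion_views, shift_px: int = 3) -> np.ndarray:
    """Re-project a 2D image, perturb the projections of selected views to
    mimic patient motion, then reconstruct the 'second medical image'."""
    theta = np.linspace(0.0, 180.0, 720, endpoint=False)
    sinogram = radon(image, theta=theta)           # re-projection (projection data)
    for k in motion_views:
        # Apply a rigid shift to the projection at each "moving" scan angle.
        sinogram[:, k] = np.roll(sinogram[:, k], shift_px)
    return iradon(sinogram, theta=theta)           # reconstruct with motion applied
```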
Step S404, generating a second flat scan image of the scanned object at each scan angle according to the second medical image, and using the second flat scan image as an image processing result corresponding to the first flat scan image.
In step S404, the second flat scan image may likewise be generated from the medical image using a topogram generation algorithm.
Step S405, taking the first pan-scan image and the corresponding image processing result as a training sample, and training the artificial neural network in the medical image motion estimation unit.
For example, the artificial neural network in the medical image processing unit is trained using the second flat scan image as training data and the first flat scan image as the gold standard. In this embodiment, the artificial neural network is trained by supervised learning: the second flat scan image serves as training data and the corresponding first flat scan image as the gold standard; the second and first flat scan images are used together as one group of data, and the parameters of the artificial neural network are updated through a gradient descent algorithm and error back-propagation until the error between the flat scan image predicted by the network and the first flat scan image is smaller than expected (called parameter convergence), yielding a fully trained artificial neural network.
In some of these embodiments, the medical image processing unit is a medical image segmentation unit for segmenting organs and tissues in the medical image. Fig. 5 is a flowchart of a training method of a medical image segmentation unit according to an embodiment of the present application, as shown in fig. 5, the flowchart comprising the steps of:
In step S501, a first medical image of a scanned object is acquired.
The first medical image comprises a medical image obtained by scanning the scanned object to acquire raw data and reconstructing from the raw data. The medical image may be a CT image or a PET image. The first medical image may be acquired in real time from an imaging system, such as a CT system, a PET system, or a PET-CT system, or may be acquired from a medical image library.
Step S502, generating a first flat scan image of the scanned object under different scanning angles according to the first medical image.
The flat scan image of the embodiments of the present application may be acquired by scanning with the X-ray tube fixed at a certain scan angle, producing a digital image generated by a computer; it may also be generated from a medical image using a topogram generation algorithm.
In step S503, the image segmentation result is marked in the first flat scan image at each scan angle, so as to obtain a second flat scan image, and the second flat scan image is used as the image processing result corresponding to the first flat scan image.
The image segmentation result may be an image obtained by adding a label frame to the flat scan image, or other information representing the position of the label frame, for example, label frame coordinates.
Step S504, taking the first flat scan image and the corresponding image processing result as a training sample, and training the artificial neural network in the medical image segmentation unit.
For example, the artificial neural network in the medical image segmentation unit is trained using the first flat scan image as training data and the second flat scan image as the gold standard. In this embodiment, the artificial neural network is trained by supervised learning: the first flat scan image serves as training data and the second flat scan image as the gold standard; the first and second flat scan images are used together as one group of data, and the parameters of the artificial neural network are updated through a gradient descent algorithm and error back-propagation until the error between the flat scan image predicted by the network and the second flat scan image is smaller than expected (called parameter convergence), yielding a fully trained artificial neural network. A sketch of constructing such a training pair is given below.
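As one assumed concrete form of the label frame, the sketch below rasterizes box coordinates into a binary gold-standard image paired with the first flat scan image; the patent does not fix this representation:

```python
import numpy as np

def make_segmentation_pair(topogram: np.ndarray, box):
    """Build a (training image, gold standard) pair for the segmentation unit.
    `box` = (row0, col0, row1, col1) is an assumed label-frame format."""
    r0, c0, r1, c1 = box
    gold = np.zeros_like(topogram, dtype=np.uint8)
    gold[r0:r1, c0:c1] = 1  # label frame rasterized into a binary mask
    return topogram, gold
```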
The embodiment also provides a medical image motion estimation method. The medical image motion estimation method realizes the motion estimation of the medical image based on the medical image motion estimation unit trained by the training method shown in fig. 4. Fig. 6 is a flowchart of a medical image motion estimation method according to an embodiment of the present application, as shown in fig. 6, the flowchart including the steps of:
Step S601, acquiring a third medical image of the scanned object, and generating third flat scan images at different scan angles according to the third medical image.
In step S602, the third flat scan image is processed by the medical image motion estimation unit to obtain a fourth flat scan image from which the influence of motion of the scanned object at different scan angles has been eliminated.
Step S603, the third swipe image and the fourth swipe image are compared.
In step S604, in the case that the difference between the third and fourth swipe images is greater than the preset threshold, it is determined that there is motion of the scanned object under the corresponding scan angle.
The medical image motion estimation unit trained by the method shown in fig. 4 is trained to eliminate the influence of scanned-object motion from the input flat scan image. Therefore, if the difference between the input third flat scan image and the fourth flat scan image output by the unit is smaller than the preset threshold, the third flat scan image contains almost no influence from motion of the scanned object. If the difference is greater than the preset threshold, the third flat scan image contains the influence of scanned-object motion, i.e., the scanned object moved at the corresponding scan angle. In the above-described embodiment, therefore, whether the scanned object moved at the corresponding scan angle can be determined by comparing the third and fourth flat scan images.
In some of these embodiments, the magnitude of the difference between the third and fourth flat scan images may be determined from their residual image. For example, subtract the fourth flat scan image from the third to obtain a residual image, and judge from the average pixel value of the residual image whether the difference between the two images is greater than the preset threshold. If the average pixel value of the residual image is greater than the preset threshold, the third and fourth flat scan images differ significantly, indicating that the scanned object moved at the corresponding scan angle.
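A minimal sketch of this comparison; the threshold value is an illustrative assumption:

```python
import numpy as np

def has_motion(topogram_in: np.ndarray, topogram_out: np.ndarray,
               threshold: float = 0.05) -> bool:
    """Flag motion at this scan angle when the average pixel value of the
    residual image exceeds a preset threshold."""
    residual = np.abs(topogram_in.astype(np.float32) -
                      topogram_out.astype(np.float32))
    return float(residual.mean()) > threshold
```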
Based on the medical image motion estimation method shown in fig. 6, the embodiment also provides a motion artifact correction method. Fig. 7 is a flowchart of a motion artifact correction method according to an embodiment of the present application, as shown in fig. 7, the flowchart including the steps of:
step S701, determining a scan angle at which a scanned object has motion according to a medical image motion estimation method.
The medical image motion estimation method shown in fig. 6 can determine, from the flat scan image at a given scan angle, whether the scanned object moved at that angle. Therefore, in this embodiment, the flat scan images obtained at the respective scan angles are processed by the above medical image motion estimation method, so that the scan angles at which the scanned object moved can be identified.
In step S702, when reconstructing an image from projection data of a third medical image, a weight of projection data corresponding to a scan angle at which a scanned object has motion is reduced to correct motion artifact of the third medical image.
For example, in CT imaging, raw data or projection data covering scan angles from 0° to 180° is generally used to reconstruct a CT image, but a CT image can also be reconstructed from data covering less than 180°. In particular, helically scanned CT data can cover a full 360° of scan angles, so selectively reducing the weight of the projection data corresponding to the scan angles with motion during reconstruction not only reduces motion artifacts in the reconstructed medical image but also avoids reducing, or noticeably reducing, its resolution.
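A crude sketch of the down-weighting idea, with scikit-image's iradon again standing in for the reconstruction; a real implementation would fold the weights into the reconstruction filter and normalization, which are simplified here:

```python
import numpy as np
from skimage.transform import iradon

def reconstruct_with_motion_weights(sinogram: np.ndarray, theta: np.ndarray,
                                    motion_views, down_weight: float = 0.2):
    """Down-weight (rather than discard) the projections of views flagged
    as containing motion before filtered back-projection."""
    weights = np.ones(len(theta), dtype=np.float32)
    weights[motion_views] = down_weight
    weighted = sinogram * weights[np.newaxis, :]   # scale each view's projection
    recon = iradon(weighted, theta=theta)
    return recon * (len(theta) / weights.sum())    # rough intensity renormalization
```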
Fig. 8 is a flowchart of a motion artifact correction method according to a preferred embodiment of the present application, as shown in fig. 8, the flowchart comprising the steps of:
Step S801, acquiring a motion-free reconstructed image.
Step S802, re-projecting (forward projecting) the reconstructed image and adding motion parameters in the scan angle (view) direction.
Step S803, generating, with the motion simulation platform, flat scan images with motion at different scan angles from the reconstructed image.
Step S804, generating, with the simulation platform, the flat scan images corresponding to the motion-free reconstructed image, to serve as the gold standard.
Step S805, training the medical image motion estimation unit using the flat scan images obtained in step S803 and step S804 as training samples.
Step S806, acquiring a reconstructed image whose motion is to be estimated, and generating, with the simulation platform, a flat scan image at each scan angle of that reconstructed image.
Step S807, inputting the flat scan images obtained in step S806 one by one into the medical image motion estimation unit to obtain the corresponding motion-free flat scan images.
Step S808, comparing each flat scan image input to the medical image motion estimation unit with the flat scan image output from it, determining the scan angles at which the scanned object moved, and performing motion artifact correction.
In clinical diagnosis, reconstructed images often contain motion artifacts caused by movement of the patient's head or heart, which seriously affect the physician's diagnosis. With the rapid development of neural networks, motion artifact correction can also be addressed through network training, and good results require a large amount of data, so the preparation of training data is particularly important. Clinically, because of certain post-processing methods, it is difficult to collect many motion-artifact images for training. The embodiments of the present application solve the problem of insufficient network training data: flat scan images simulating motion in different directions can be obtained for network training.
To verify the link between the flat scan image and the motion of the scanned object, two sets of images were simulated in the preferred embodiment: one set of medical images with motion artifacts generated with motion parameters, and one set without motion parameters. Flat scan images at each scan angle were generated from both sets and compared, and the comparison was observed at different scan angles. Fig. 9 is a schematic view of the influence of scanned-object motion on the flat scan image according to the preferred embodiment of the present application. In fig. 9, (a) is the simulated flat scan image at a scan angle of 135° generated from the first set of images, and (b) is the residual image obtained by subtracting from image (a) its output after processing by the medical image motion estimation unit; the residual is small. In fig. 9, (c) is the simulated flat scan image at a scan angle of 0° generated from the first set of images, and (d) is the residual image obtained by subtracting from it the flat scan image simulated at 0° from the second set of images; the residual is large, which shows that motion of the scanned object affects the flat scan image.
Through this embodiment, flat-scan images with motion can be simulated at different body positions and different scan angles, which supports training data preparation for the artificial neural network; the simulation platform that generates the flat-scan images is computationally fast and is particularly suitable for motion simulation on large data sets such as whole-body CT; and the flat-scan images at different scan angles can be used to determine at which scan angles the scanned object moved.
A medical image segmentation unit can be trained based on the training method of the medical image processing unit shown in Fig. 5, and this embodiment accordingly also provides a medical image segmentation method. Fig. 10 is a flowchart of a medical image segmentation method according to an embodiment of the present application. As shown in Fig. 10, the flow includes the following steps:
Step S1001, a fourth flat-scan image is acquired.
Step S1002, the medical image segmentation unit processes the fourth flat-scan image to obtain a fifth flat-scan image, wherein the fifth flat-scan image is annotated with an image segmentation result. A minimal inference sketch of these two steps follows.
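The sketch below assumes the segmentation unit is a trained PyTorch network seg_net that outputs per-pixel class scores; the names are illustrative.

```python
import torch

def segment_flat_scan(seg_net, flat_scan):
    """Run the segmentation unit on one flat-scan image (H, W); return a label map."""
    x = torch.as_tensor(flat_scan, dtype=torch.float32)[None, None]  # shape (1, 1, H, W)
    with torch.no_grad():
        scores = seg_net(x)                        # (1, n_classes, H, W)
    return scores.argmax(dim=1)[0].cpu().numpy()   # the annotated fifth flat-scan image
```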
In the design of imaging medical diagnostic products, engineers often need to consider the dose delivered to different organs, so as to avoid subjecting patients to a large amount of unnecessary radiation, especially X-ray radiation. At the same time, the projection data obtained vary with the dose applied to each organ, and the reconstructed result must still meet clinical requirements, so accurate segmentation of the different organs is needed.
There are currently many data preparation methods for training neural networks for image segmentation, for example using threshold-segmented images or images segmented from edge information as the gold standard for the network. However, because clinical data are protected and relatively scarce, the data that can be prepared are insufficient; at the same time, because medical projection data are very large, reconstruction is very slow, and sometimes the desired reconstructed image cannot be obtained for lack of projection data, so the training data are not sufficiently guaranteed.
With the training method provided by the embodiment of the present application, only one or a few sets of reconstructed images are needed. Each set of reconstructed images may include images at different body positions, for example the four head-first positions (supine, prone, left lateral, right lateral) and the four feet-first positions (supine, prone, left lateral, right lateral), eight positions in total. By generating flat-scan images at different scan angles from the reconstructed images of these eight body positions under different parameter settings, flat-scan images of different body positions and different angles are obtained, so that the artificial neural network has more sufficient data for training and the accuracy of post-processing is ensured. The training method of this embodiment may be applied to any similar training model. In this embodiment, accurate flat-scan images of different body positions are obtained from the image data, and segmentation maps for different body positions and different organs are then trained through the artificial neural network, for post-processing display and related algorithm requirements; a sketch of this data-preparation idea is given below.
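The sketch assumes, for simplicity, that the eight body positions can be emulated by flips and rotations of a single reconstructed image, and it reuses the hypothetical make_flat_scan helper from the earlier sketch; label_organs stands for the annotation step and is likewise illustrative.

```python
import numpy as np

# Hypothetical stand-ins for the eight body positions (four head-first, four feet-first).
POSITIONS = {
    "head_first_supine": lambda im: im,
    "head_first_prone":  lambda im: np.flipud(im),
    "head_first_left":   lambda im: np.rot90(im, 1),
    "head_first_right":  lambda im: np.rot90(im, -1),
    "feet_first_supine": lambda im: np.fliplr(im),
    "feet_first_prone":  lambda im: np.flipud(np.fliplr(im)),
    "feet_first_left":   lambda im: np.rot90(np.fliplr(im), 1),
    "feet_first_right":  lambda im: np.rot90(np.fliplr(im), -1),
}

def build_segmentation_training_set(recon, angles, label_organs):
    """One reconstructed image -> flat scans at all positions and angles, plus gold labels."""
    samples = []
    for name, pose in POSITIONS.items():
        posed = pose(recon)
        for a in angles:
            scan = make_flat_scan(posed, a)              # first flat-scan image
            samples.append((scan, label_organs(scan)))   # image + segmentation gold standard
    return samples
```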
Compared with schemes in the related art that use other data as the training data of an artificial neural network, the embodiment of the present application can obtain sufficient training data without scanning a large amount of data; a large number of flat-scan images at different angles can be obtained from only one set of image data, so the network data provided to the artificial-neural-network-based medical image segmentation unit are more sufficient.
In addition, in PET-CT the raw data obtained by scanning are very large, so they may no longer be available when later processing requires them. Whole-body PET-CT also requires a different dose for each region, while ordinary whole-body flat-scan image data are relatively scarce, and training a good image segmentation network requires a large amount of training data. In the embodiment of the present application, flat-scan images at different scan angles are generated from a whole-body medical image using the simulation platform, which solves the training problem when no raw data are available, or when the raw data exist but the needed flat-scan images are missing. Moreover, the simulation platform of this embodiment can be implemented on a single simulation PC and can quickly generate flat-scan images based on flat-scan image simulation algorithms known in the related art, without other tools.
The medical image segmentation method described above may be applied to tracer dose modulation in medical scanning. Fig. 11 is a flowchart of a dose modulation method according to a preferred embodiment of the present application. As shown in Fig. 11, the flow includes the following steps:
Step S1101, one or several sets of whole-body or half-body medical image data are acquired.
Step S1102, flat-scan images at different scan angles are generated using the simulation platform.
Step S1103, organ markers are manually annotated on the flat-scan images.
Step S1104, a medical image segmentation unit is trained using the flat-scan images and the manually annotated organ markers.
Step S1105, before the scan, a flat scan is first performed with the bulb of the CT system fixed at one scan angle to obtain a whole-body or half-body flat-scan image.
Step S1106, the whole-body or half-body flat-scan image is input into the medical image segmentation unit trained in step S1104 to obtain an image segmentation result.
Step S1107, PET tracer dose modulation is performed according to the image segmentation result; a minimal sketch of this step follows.
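The sketch below illustrates step S1107, assuming an illustrative mapping from organ labels in the segmentation result to relative tracer-dose factors; the label ids and factors are hypothetical and not taken from the present application.

```python
import numpy as np

DOSE_FACTORS = {0: 1.0, 1: 0.6, 2: 0.8, 3: 1.2}  # e.g. background, lung, soft tissue, abdomen

def modulation_profile(label_map):
    """Per-couch-position dose factor: the largest organ factor present in each image row."""
    profile = []
    for row in label_map:                          # one row ~ one couch position
        factors = [DOSE_FACTORS.get(int(l), 1.0) for l in np.unique(row)]
        profile.append(max(factors))
    return np.asarray(profile)
```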
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of technical features contains no contradiction, it should be considered within the scope of this description.
The above examples merely represent a few embodiments of the present application; their description is relatively specific and detailed, but they are not to be construed as limiting the scope of the invention. It should be noted that those skilled in the art could make various modifications and improvements without departing from the spirit of the present application, all of which fall within the protection scope of the present application. Accordingly, the protection scope of the present application shall be determined by the appended claims.

Claims (12)

1. A training method of a medical image processing unit, applied to a medical image processing unit comprising an artificial neural network, characterized by comprising:
acquiring a first medical image of a scanned object;
generating first flat-scan images of the scanned object at different scan angles according to the first medical image;
generating an image processing result corresponding to the first flat-scan image at each scan angle, wherein the image processing result comprises a motion estimation processing result or an image segmentation result;
and taking the first flat-scan images and the corresponding image processing results as training samples, and training the artificial neural network in the medical image processing unit.
2. The method according to claim 1, wherein the medical image processing unit is a medical image motion estimation unit, and generating the image processing result corresponding to the first flat-scan image at each scan angle comprises:
simulating motion at each scan angle of the first medical image to obtain a second medical image;
and generating second flat-scan images of the scanned object at each scan angle according to the second medical image, and taking the second flat-scan images as the image processing results corresponding to the first flat-scan images.
3. The method of claim 2, wherein simulating motion at each scan angle of the first medical image to obtain the second medical image comprises:
re-projecting the first medical image to obtain projection data of the first medical image;
applying a motion effect to the projection data corresponding to each of the scan angles in the projection data of the first medical image;
and reconstructing the second medical image according to the projection data to which the motion effect has been applied.
4. The method of claim 2, wherein taking the first flat-scan images and the corresponding image processing results as training samples and training the artificial neural network in the medical image processing unit comprises:
training the artificial neural network in the medical image processing unit by taking the second flat-scan images as training data and the first flat-scan images as the gold standard.
5. The method according to claim 1, wherein the medical image processing unit is a medical image segmentation unit, and generating the image processing result corresponding to the first flat-scan image at each scan angle comprises:
annotating an image segmentation result in the first flat-scan image at each scan angle to obtain a second flat-scan image, and taking the second flat-scan image as the image processing result corresponding to the first flat-scan image.
6. The method of claim 5, wherein taking the first flat-scan images and the corresponding image processing results as training samples and training the artificial neural network in the medical image processing unit comprises:
training the artificial neural network in the medical image processing unit by taking the first flat-scan images as training data and the second flat-scan images as the gold standard.
7. A medical image motion estimation method, characterized by comprising:
acquiring a third medical image of the scanned object, and generating third flat-scan images at different scan angles according to the third medical image;
processing the third flat-scan images by a medical image motion estimation unit trained using the training method of the medical image processing unit according to any one of claims 2 to 4, to obtain fourth flat-scan images in which the influence of motion of the scanned object at the different scan angles is eliminated;
comparing the third flat-scan image and the fourth flat-scan image;
and determining that the scanned object moved at the corresponding scan angle when the difference between the third flat-scan image and the fourth flat-scan image is larger than a preset threshold.
8. The method of claim 7, wherein comparing the third flat-scan image and the fourth flat-scan image comprises:
subtracting the fourth flat-scan image from the third flat-scan image to obtain a residual image;
and judging, according to the average pixel value of the residual image, whether the difference between the third flat-scan image and the fourth flat-scan image is larger than the preset threshold.
9. A method of motion artifact correction, comprising:
determining, by the medical image motion estimation method according to claim 7 or 8, the scan angles at which the scanned object moved;
and when reconstructing an image from the projection data of the third medical image, reducing the weight of the projection data corresponding to the scan angles at which the scanned object moved, so as to correct the motion artifacts of the third medical image.
10. A medical image segmentation method, characterized by comprising:
acquiring a fourth flat-scan image;
and processing the fourth flat-scan image by a medical image segmentation unit trained using the training method of the medical image processing unit according to claim 5 or 6, to obtain a fifth flat-scan image, wherein the fifth flat-scan image is annotated with an image segmentation result.
11. A computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the training method of a medical image processing unit according to any one of claims 1 to 6, and/or the medical image motion estimation method according to claim 7 or 8, and/or the motion artifact correction method according to claim 9, and/or the medical image segmentation method according to claim 10.
12. A computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements the training method of a medical image processing unit according to any one of claims 1 to 6, and/or the medical image motion estimation method according to claim 7 or 8, and/or the motion artifact correction method according to claim 9, and/or the medical image segmentation method according to claim 10.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010382351.9A CN111583354B (en) 2020-05-08 2020-05-08 Training method of medical image processing unit and medical image motion estimation method


Publications (2)

Publication Number Publication Date
CN111583354A (en) 2020-08-25
CN111583354B (en) 2024-01-02





Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 201807 Shanghai City, north of the city of Jiading District Road No. 2258
Applicant after: Shanghai Lianying Medical Technology Co.,Ltd.
Address before: 201807 Shanghai City, north of the city of Jiading District Road No. 2258
Applicant before: SHANGHAI UNITED IMAGING HEALTHCARE Co.,Ltd.

GR01 Patent grant