WO2021233315A1 - Method and system for image quality optimization - Google Patents

Method and system for image quality optimization

Info

Publication number
WO2021233315A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
feature
imaging
quality
sample
Prior art date
Application number
PCT/CN2021/094461
Other languages
English (en)
French (fr)
Inventor
毛佚夫
赵立毅
曹文静
Original Assignee
上海联影医疗科技股份有限公司 (Shanghai United Imaging Healthcare Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海联影医疗科技股份有限公司 (Shanghai United Imaging Healthcare Co., Ltd.)
Priority to EP21808874.8A priority Critical patent/EP4141782A4/en
Priority to US17/446,303 priority patent/US20210390694A1/en
Publication of WO2021233315A1

Classifications

    • G - PHYSICS
      • G06 - COMPUTING; CALCULATING OR COUNTING
        • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T5/00 - Image enhancement or restoration
            • G06T5/50 - by the use of more than one image, e.g. averaging, subtraction
            • G06T5/60; G06T5/70; G06T5/73; G06T5/90
          • G06T7/00 - Image analysis
            • G06T7/0002 - Inspection of images, e.g. flaw detection
              • G06T7/0012 - Biomedical image inspection
          • G06T11/00 - 2D [Two Dimensional] image generation
            • G06T11/003 - Reconstruction from projections, e.g. tomography
              • G06T11/008 - Specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction
          • G06T2207/00 - Indexing scheme for image analysis or image enhancement
            • G06T2207/10 - Image acquisition modality
              • G06T2207/10072 - Tomographic images
                • G06T2207/10081 - Computed x-ray tomography [CT]
              • G06T2207/10116 - X-ray image
            • G06T2207/20 - Special algorithmic details
              • G06T2207/20081 - Training; Learning
              • G06T2207/20084 - Artificial neural networks [ANN]
              • G06T2207/20172 - Image enhancement details
                • G06T2207/20182 - Noise reduction or smoothing in the temporal domain; Spatio-temporal filtering
            • G06T2207/30 - Subject of image; Context of image processing
              • G06T2207/30004 - Biomedical image processing
              • G06T2207/30168 - Image quality inspection
          • G06T2210/00 - Indexing scheme for image generation or computer graphics
            • G06T2210/41 - Medical
      • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
        • G16H - HEALTHCARE INFORMATICS, i.e. ICT SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
          • G16H30/00 - ICT specially adapted for the handling or processing of medical images
            • G16H30/40 - for processing medical images, e.g. editing
          • G16H50/00 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
            • G16H50/20 - for computer-aided diagnosis, e.g. based on medical expert systems

Definitions

  • the embodiments of this specification relate to the field of image processing technology, and in particular to a method and system for optimizing image quality.
  • An aspect of the embodiments of this specification provides a method for image quality optimization, including: acquiring an image to be processed; determining at least one related feature of the image, the at least one related feature including at least one of an acquisition parameter feature related to the image, an imaging parameter feature related to the image, an imaging object feature, and a quality feature of the image; inputting the image and the at least one related feature into an image processing model; and determining an optimized image of the image based on the output of the image processing model.
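The claimed flow (acquire an image, determine its related features, input image and features into a model, determine the optimized image from the model's output) can be sketched as follows. All function and key names are illustrative assumptions, and the model is a stand-in for the trained image processing model described in the embodiments; the specification does not prescribe a concrete API.

```python
# Illustrative sketch of the claimed optimization flow. Names are hypothetical;
# the real image processing model of the embodiments is a trained machine
# learning model, stubbed out here as any callable taking (image, features).

def determine_related_features(image, metadata):
    """Gather the 'at least one related feature' named in the claim:
    acquisition parameters, imaging parameters, imaging-object features,
    and a quality feature of the image (here, a toy noise estimate)."""
    diffs = [abs(a - b) for a, b in zip(image, image[1:])]
    noise_level = sum(diffs) / max(len(diffs), 1)
    return {
        "acquisition": metadata.get("acquisition", {}),  # e.g. tube voltage/current
        "imaging": metadata.get("imaging", {}),          # e.g. reconstruction kernel
        "object": metadata.get("object", {}),            # e.g. scanned body part
        "quality": {"noise_level": noise_level},
    }

def optimize_image(image, metadata, model):
    """Input the image together with its related features into the model,
    and determine the optimized image from the model's output."""
    features = determine_related_features(image, metadata)
    return model(image, features)
```

A trivial identity model, for example `lambda img, feats: list(img)`, is enough to exercise the flow end to end.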
  • An aspect of the embodiments of this specification provides a method for image quality optimization, including: acquiring collected data to be processed; determining at least one related feature of the collected data, the at least one related feature including at least one of a collection parameter feature related to the collected data, an imaging object feature, and a quality feature of the collected data; inputting the collected data and the at least one related feature into an image processing model; and determining optimized image data of the collected data based on the output of the image processing model.
  • An aspect of the embodiments of this specification provides an image quality optimization system, including: an acquisition module for acquiring an image to be processed; a determining module for determining at least one related feature of the image, the at least one related feature including at least one of an acquisition parameter feature related to the image, an imaging parameter feature related to the image, an imaging object feature, and a quality feature of the image; an input module for inputting the image and the at least one related feature into an image processing model; and an optimization module for determining an optimized image of the image based on the output of the image processing model.
  • An aspect of the embodiments of this specification provides a system for image quality optimization, including: an acquisition module for acquiring collected data to be processed; a determining module for determining at least one related feature of the collected data, the at least one related feature including at least one of a collection parameter feature related to the collected data, an imaging object feature, and a quality feature of the collected data; an input module for inputting the collected data and the at least one related feature into an image processing model; and an optimization module for determining optimized image data of the collected data based on the output of the image processing model.
  • An aspect of the embodiments of this specification provides a device for optimizing image quality.
  • the device includes a processor and a memory.
  • the memory is used to store instructions that, when executed by the processor, cause the device to perform operations corresponding to the image quality optimization method described above.
  • An aspect of the embodiments of this specification provides a computer-readable storage medium storing computer instructions that, when executed by a processor, implement operations corresponding to the image quality optimization method described in any of the preceding items.
  • Fig. 1 is a schematic diagram of an exemplary image quality optimization system according to some embodiments of this specification.
  • Fig. 2 is a schematic diagram of hardware and/or software components of an exemplary computing device according to some embodiments of this specification.
  • Fig. 3 is a schematic diagram of hardware and/or software components of an exemplary mobile device according to some embodiments of this specification.
  • Figs. 4A and 4B are block diagrams of exemplary processing devices according to some embodiments of this specification.
  • Fig. 5 is an exemplary flowchart of image quality optimization according to some embodiments of this specification.
  • Fig. 6A is an exemplary flowchart of generating noise features of an image based on image reconstruction according to some embodiments of this specification.
  • Fig. 6B is an exemplary flowchart of determining image quality features based on a feature processing model according to some embodiments of this specification.
  • Fig. 7 is an exemplary flowchart of the training process of the image processing model according to some embodiments of this specification.
  • Fig. 8 is an exemplary flowchart of obtaining training samples according to some embodiments of this specification.
  • Figs. 9A and 9B are exemplary flowcharts of adding noise to qualified images to obtain sample images according to some embodiments of this specification.
  • Fig. 10 is an exemplary flowchart of obtaining an image processing model corresponding to the type of imaging device according to some embodiments of this specification.
  • Fig. 11 is an exemplary flowchart of obtaining an image processing model corresponding to an object type according to some embodiments of this specification.
  • Fig. 12 is another exemplary flowchart of image quality optimization according to some embodiments of this specification.
  • Figs. 13A and 13B show comparison results of image optimization using the optimization method of the embodiments of this specification versus other methods.
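Figs. 8, 9A, and 9B above concern building training samples by adding noise to qualified (high-quality) images, so that (sample, qualified) pairs can supervise training of the image processing model. A minimal sketch of that idea, assuming an additive Gaussian noise model and hypothetical names (the embodiments may use other noise models), might look like:

```python
import random

def make_training_pair(qualified_image, noise_std=0.1, seed=None):
    """Degrade a qualified image with additive Gaussian noise to obtain a
    sample image; the (sample, qualified) pair can then serve as a training
    sample for an image processing model. The Gaussian noise model and the
    parameter names here are assumptions, not part of the disclosure."""
    rng = random.Random(seed)
    sample_image = [p + rng.gauss(0.0, noise_std) for p in qualified_image]
    return sample_image, qualified_image
```

The noise level used for degradation can itself be recorded as a quality feature of the sample image and fed to the model alongside the image during training.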
  • the terms "system," "device," "unit," and/or "module" used in this specification are ways of distinguishing different components, elements, parts, or assemblies at different levels; these words may be replaced by other expressions that achieve the same purpose.
  • the system may include an imaging system.
  • the imaging system may include a single-modality system and/or a multi-modality system.
  • the term "modality" as used herein broadly refers to imaging or treatment methods or techniques for collecting, generating, processing, and/or analyzing imaging information of a subject or treating the subject.
  • Single-modality systems may include, for example, an ultrasound (US) imaging system, an X-ray imaging system, a computed tomography (CT) system, a magnetic resonance imaging (MRI) system, a positron emission tomography (PET) system, an optical coherence tomography (OCT) imaging system, an intravascular ultrasound (IVUS) imaging system, a near-infrared spectroscopy (NIRS) imaging system, etc., or any combination thereof.
  • Multi-modality systems may include, for example, an X-ray imaging-magnetic resonance imaging (X-ray-MRI) system, a positron emission tomography-X-ray imaging (PET-X-ray) system, a single-photon emission computed tomography-magnetic resonance imaging (SPECT-MRI) system, a positron emission tomography-computed tomography (PET-CT) system, a C-arm system, a positron emission tomography-magnetic resonance imaging (PET-MR) system, a digital subtraction angiography-magnetic resonance imaging (DSA-MRI) system, etc.
  • the medical system may include a treatment system.
  • the treatment system may include a treatment planning system (TPS), an image-guided radiation therapy (IGRT) system, and the like.
  • Image guided radiation therapy may include treatment equipment and imaging equipment.
  • the treatment device may include a linear accelerator, a cyclotron, a synchrotron, etc., which are configured to perform radiotherapy on a subject.
  • the treatment equipment may include accelerators of various particle types, for example, photons, electrons, protons, or heavy ions.
  • the imaging equipment may include an MRI scanner, a CT scanner (for example, a cone beam computed tomography (CBCT) scanner), a digital radiography (DR) scanner, an electronic portal imaging device (EPID), and the like.
  • the subject may include living organisms and/or non-living organisms.
  • the organism may be human, animal, plant or specific parts, organs and/or tissues thereof.
  • the subject may include the head, neck, chest, heart, stomach, blood vessels, soft tissues, tumors, nodules, etc., or any combination thereof.
  • the subject may be a man-made composition of organic and/or inorganic substances, with or without life.
  • the terms "imaging object," "imaging target," and "subject" may be used interchangeably in this specification.
  • In this specification, an image including a representation of an object (for example, a patient, a subject, or a part thereof, such as organs and/or tissues, e.g., the heart, liver, or lungs) may be simply referred to as the image of the object or an image including the object.
  • the operation performed on the representation of the object in the image may be simply referred to as the operation on the object.
  • the segmentation of a part of the image including the representation of the organ and/or tissue in the image may be simply referred to as the segmentation of the organ and/or tissue.
  • Fig. 1 is a schematic diagram of an exemplary image quality optimization system according to some embodiments of the present specification.
  • the image quality optimization system 100 may include an imaging device 110, a processing device 120, a terminal 130, a network 140, and a storage device 150.
  • the various components in the image quality optimization system 100 can be connected in a variety of ways.
  • the imaging device 110 and the processing device 120 may be connected through the network 140 or directly (as shown by the dotted arrow connecting the imaging device 110 and the processing device 120 in FIG. 1).
  • the storage device 150 and the processing device 120 may be directly connected or connected through the network 140.
  • the terminal 130 and the processing device 120 may be connected through the network 140, or may be directly connected (as shown by the dotted arrow connecting the terminal 130 and the processing device 120 in FIG. 1).
  • the imaging device 110 can scan an imaging object located in the scanning area and generate imaging data related to the imaging object (which may also be referred to as "raw data" or "scan data").
  • the imaging object may include a biological object (for example, a human body, an animal, etc.), a non-biological object (for example, a phantom), and the like.
  • the imaging device 110 may be a computed tomography (CT) imaging device, a positron emission tomography (PET) imaging device, a magnetic resonance imaging (MRI) device, a single-photon emission computed tomography (SPECT) imaging device, an emission computed tomography (ECT) imaging device, an ultrasound (US) imaging device, a digital X-ray (DR) imaging device, etc., or any combination thereof (for example, a PET-CT imaging device, a PET-MRI imaging device, etc.).
  • the imaging device 110 may include a gantry 111, a detector 112, an imaging area 113, and a scanning bed 114.
  • the gantry 111 can support the detector 112.
  • the imaging object can be placed on the scanning bed 114 and enter the imaging area 113 to be scanned.
  • the detector 112 may include one or more detector units.
  • the detector unit may include a single row of detectors and/or multiple rows of detectors.
  • the detector unit may include a scintillation detector (for example, a cesium iodide detector) or other detectors.
  • the gantry 111 can rotate clockwise or counterclockwise about its axis of rotation.
  • the imaging device 110 may further include an X-ray tube (not shown in the figure), and the X-ray tube may rotate together with the gantry 111.
  • the X-ray tube can emit radiation beams (for example, X-rays) toward the imaging object; the beams are attenuated by the imaging object and detected by the detector 112 to generate imaging data.
  • the processing device 120 may process data and/or information acquired from the imaging device 110, the terminal 130, and/or the storage device 150. For example, the processing device 120 may generate an image based on the imaging data acquired by the imaging device 110. For another example, the processing device 120 may optimize the quality of the generated image. Specifically, the processing device 120 may determine at least one relevant feature of the image to be processed. The at least one related feature may include one or more of the acquisition parameter feature related to the image, the imaging parameter feature related to the image, the imaging object feature, the quality feature of the image, and the like. The processing device 120 may input the image and at least one related feature into the image processing model. The processing device 120 may determine an optimized image of the image based on the output of the image processing model. In some embodiments, the processing device 120 may be a single server or a group of servers. The server group can be centralized or distributed. In some embodiments, the processing device 120 may be local or remote.
  • the terminal 130 may include a mobile device 131, a tablet computer 132, a notebook computer 133, etc., or any combination thereof.
  • the terminal 130 may interact with other components in the image quality optimization system 100 through the network 140.
  • the terminal 130 may send one or more control instructions to the imaging device 110 to control the imaging device 110 to scan the imaging object according to the instructions.
  • the mobile device 131 may include, but is not limited to, a smart phone, a handheld game console, smart glasses, a smart watch, a wearable device, a virtual display device, a display enhancement device, etc., or any combination thereof.
  • the terminal 130 may be part of the processing device 120.
  • the terminal 130 may be integrated with the processing device 120 as a console of the imaging device 110.
  • a user/operator (for example, a doctor) of the image quality optimization system 100 can control the operation of the imaging device 110 through the console, for example, to scan an imaging object.
  • the network 140 may include any suitable network capable of facilitating the exchange of information and/or data of the image quality optimization system 100.
  • the network 140 may include a wired network, a fiber-optic network, a telecommunications network, a local area network, a wireless local area network (WLAN), a metropolitan area network (MAN), a public switched telephone network (PSTN), a Bluetooth™ network, a ZigBee™ network, a near-field communication (NFC) network, etc., or any combination thereof.
  • the network 140 may include one or more network access points.
  • the network 140 may include wired and/or wireless network access points, such as base stations and/or Internet exchange points, through which one or more components of the image quality optimization system 100 may connect to the network 140 to exchange data and/or information.
  • the storage device 150 may store data (for example, scan data of an imaging object), instructions, and/or any other information.
  • the storage device 150 may store data obtained from the imaging device 110, the terminal 130, and/or the processing device 120.
  • the storage device 150 may store the treatment plan obtained from the imaging device 110, scan data of the imaging object, etc.
  • the storage device 150 may store data and/or instructions that can be executed or used by the processing device 120 to perform the exemplary methods described in this specification.
  • the storage device 150 may include mass storage, removable storage, volatile read-write storage, read-only storage (ROM), etc., or any combination thereof.
  • Mass storage can include magnetic disks, optical disks, solid state drives, and mobile storage.
  • Removable storage may include flash drives, floppy disks, optical disks, memory cards, ZIP disks, tapes, and so on.
  • Volatile read-write memory may include random access memory (RAM).
  • RAM can include dynamic random access memory (DRAM), double data rate synchronous dynamic random access memory (DDR-SDRAM), static random access memory (SRAM), thyristor random access memory (T-RAM), zero-capacitance random access memory (Z-RAM), etc.
  • ROM can include mask read-only memory (MROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), compact disc read-only memory (CD-ROM), digital versatile disc, etc.
  • the storage device 150 may be implemented by a cloud platform.
  • the cloud platform may include private cloud, public cloud, hybrid cloud, community cloud, distributed cloud, cross-cloud, multi-cloud, etc. or any combination thereof.
  • the storage device 150 may be connected to the network 140 to implement communication with one or more components (for example, the processing device 120, the terminal 130, etc.) in the image quality optimization system 100.
  • One or more components in the image quality optimization system 100 can read data or instructions in the storage device 150 via the network 140.
  • the storage device 150 may be a part of the processing device 120, or may be independent, and directly or indirectly connected to the processing device 120.
  • the image quality optimization system 100 may include one or more additional components and/or one or more components of the image quality optimization system 100 described above may be omitted. Additionally or alternatively, two or more components of the image quality optimization system 100 may be integrated into a single component. The components of the image quality optimization system 100 may be implemented on two or more sub-components.
  • FIG. 2 is a schematic diagram of exemplary hardware and/or software components of an exemplary computing device 200 according to some embodiments of this specification.
  • the computing device 200 may be used to implement any component of the image quality optimization system 100 as described herein.
  • the processing device 120 and/or the terminal 130 may be implemented on the computing device 200 via its hardware, software program, firmware, or a combination thereof, respectively.
  • the computer functions related to the image quality optimization system 100 described herein can be implemented in a distributed manner on multiple similar platforms to distribute the processing load.
  • the computing device 200 may include a processor 210, a memory 220, an input/output (I/O) 230, and a communication port 240.
  • the processor 210 may execute computer instructions (for example, program code) and perform the functions of the processing device 120 according to the technology described herein.
  • the computer instructions may include routines, programs, components, parts, data structures, procedures, modules, and functions that perform the specific functions described herein.
  • the processor 210 may execute instructions obtained from the terminal 130 and/or the storage device 150.
  • the processor 210 may include one or more hardware processors, such as a microcontroller, a microprocessor, a reduced instruction set computer (RISC), an application-specific integrated circuit (ASIC), an application-specific instruction set processor (ASIP), a central processing unit (CPU), a graphics processing unit (GPU), a physics processing unit (PPU), a microcontroller unit, a digital signal processor (DSP), a field-programmable gate array (FPGA), an advanced RISC machine (ARM), a programmable logic device (PLD), any circuit or processor capable of performing one or more functions, etc., or any combination thereof.
  • For illustration, only one processor of the computing device 200 is described; however, the computing device 200 may include multiple processors, so steps and/or method steps described in this specification as performed by one processor may also be performed by multiple processors jointly or separately.
  • For example, if the processor of the computing device 200 performs both operation A and operation B in this specification, operation A and operation B may also be performed by two or more different processors in the computing device 200 separately or jointly (for example, a first processor performs operation A and a second processor performs operation B, or the first processor and the second processor jointly perform operations A and B).
  • the memory 220 may store data/information obtained from the imaging device 110, the processing device 120, the storage device 150, the terminal 130, and/or any other components of the image quality optimization system 100.
  • the memory 220 may include mass memory, removable memory, volatile read-write memory, read-only memory (ROM), etc., or any combination thereof.
  • the memory 220 may store one or more programs and/or instructions to perform the exemplary methods described in this specification.
  • the input/output 230 may input and/or output signals, data, information, and the like. In some embodiments, the input/output 230 may enable user interaction with the processing device 120. In some embodiments, the input/output 230 may include an input device and an output device. Exemplary input devices may include a keyboard, mouse, touch screen, microphone, etc., or any combination thereof. Exemplary output devices may include display devices, speakers, printers, projectors, etc., or any combination thereof. Examples of display devices may include liquid crystal displays (LCD), light emitting diode (LED) based displays, flat panel displays, curved screens, television equipment, cathode ray tubes (CRT), touch screens, etc., or any combination thereof.
  • the communication port 240 may be connected to a network (for example, the network 140) to facilitate data communication.
  • the communication port 240 may establish a connection between the processing device 120 and the imaging device 110, the terminal 130, and/or the storage device 150.
  • the connection can be a wired connection, a wireless connection, any other communication connection that can enable data transmission and/or reception, and/or any combination of these connections.
  • Wired connections may include, for example, cables, optical cables, telephone lines, etc., or any combination thereof.
  • the wireless connection may include, for example, a Bluetooth link, a Wi-Fi link, a WiMax link, a WLAN link, a Zigbee link, a mobile network link (for example, 3G, 4G, 5G, etc.), etc., or a combination thereof.
  • the communication port 240 may be and/or include a standardized communication port, such as RS232, RS485, and the like.
  • the communication port 240 may be a specially designed port.
  • the communication port 240 may be designed according to the Digital Imaging and Communication in Medicine (DICOM) protocol.
  • Fig. 3 is a schematic diagram of hardware and/or software components of an exemplary mobile device according to some embodiments of the present specification.
  • one or more components of the image quality optimization system 100 may be implemented on the mobile device 300.
  • the mobile device 300 may include a communication platform 310, a display 320, a graphics processing unit (GPU) 330, a central processing unit (CPU) 340, an input/output 350, a memory 360, and a storage 390.
  • any other suitable components including but not limited to a system bus or a controller (not shown), may also be included in the mobile device 300.
  • the mobile operating system 370 (for example, iOS™, Android™, Windows Phone™) and one or more application programs 380 may be loaded from the storage 390 into the memory 360 for execution by the CPU 340.
  • the application program 380 may include a browser or any other suitable mobile application program for receiving and rendering information related to image processing or other information from the processing device 120.
  • the user interaction with the information stream may be implemented through the input/output 350 and provided to the processing device 120 and/or other components of the image quality optimization system 100 through the network 140.
  • a computer hardware platform can be used as a hardware platform for one or more components described herein.
  • the hardware elements, operating systems, and programming languages of such computers are conventional, and it is assumed that those skilled in the art are sufficiently familiar with them to adapt these techniques to the image quality optimization described herein.
  • a computer containing user interface elements can be implemented as a personal computer (PC) or any other type of workstation or terminal; if properly programmed, it can also serve as a server. It is believed that those skilled in the art are familiar with the structure, programming, and general operation of such computer equipment; the drawings should therefore be self-explanatory.
  • 4A and 4B are block diagrams of exemplary processing devices according to some embodiments of this specification.
  • the processing device 120 a and the processing device 120 b may be embodiments of the processing device 120.
  • the processing device 120a and the processing device 120b may be implemented on the same device or separate devices.
  • both the processing device 120a and the processing device 120b may be implemented on the computing device 200.
  • the processing device 120a may be implemented on the mobile device 300, and the processing device 120b may be implemented on the computing device 200.
  • the processing device 120a and/or the modules in the processing device 120b may be connected or communicated with each other by a wired connection or a wireless connection.
  • the processing device 120a may include an acquisition module 410, a determination module 420, an input module 430, and an optimization module 440.
  • the acquisition module 410 may be configured to acquire data and/or information from one or more components of the image quality optimization system 100.
  • the obtaining module 410 may be used to obtain images to be processed from storage devices or terminals described elsewhere in this specification.
  • the image to be processed may refer to an image (for example, 2D image, 3D image, 4D image) or raw image data (for example, raw CT data acquired by the imaging device 110).
  • the obtaining module 410 may obtain the image processing model from a storage device as described elsewhere in this specification.
  • the image processing model may be a machine learning model (for example, a deep learning model).
  • the acquisition module 410 may select the image processing model according to the relevant characteristics of the image, the type of imaging device, the type of reconstruction algorithm, the type of imaging object, the type of imaging target, or the optimization target.
  • the image to be processed and the image processing model please refer to other descriptions in this specification (for example, steps 510 and 530 and their descriptions).
  • the determining module 420 may be used to determine at least one related feature of the image, the at least one related feature includes: at least one of an image-related acquisition parameter feature, an image-related imaging parameter feature, an imaging object feature, and an image quality feature.
  • the determination module 420 may determine the quality characteristics according to optimization goals (for example, reducing noise, reducing artifacts, improving resolution, improving contrast, reducing motion, etc.). More descriptions of related features can be found elsewhere in this specification (for example, step 520 and its description).
  • the input module 430 may be used to input data to be processed for the image processing model.
  • the input module 430 is used to input the image and at least one related feature into the image processing model.
  • the input module 430 may preprocess the image and input the preprocessed image and related features into the image processing model.
  • the optimization module 440 may be used to determine an optimized image of the image based on the output of the image processing model. In some embodiments, the optimization module 440 may perform post-processing on the output of the image processing model to determine an optimized image.
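As an illustrative sketch of how the input module 430 and the optimization module 440 might cooperate, the following Python fragment preprocesses an image, passes it (with its related features) to a stand-in model, and post-processes the output. All function names and the normalization/rescaling choices here are hypothetical; the actual pre- and post-processing steps are implementation-specific.

```python
import numpy as np

def preprocess(image):
    """Normalize an image to zero mean and unit variance (a common
    preprocessing choice; the actual method is implementation-specific)."""
    image = image.astype(np.float64)
    std = image.std()
    return (image - image.mean()) / (std if std > 0 else 1.0)

def postprocess(model_output, original):
    """Rescale the model output back to the original intensity range."""
    out = model_output - model_output.min()
    rng = out.max() if out.max() > 0 else 1.0
    out = out / rng
    return out * (original.max() - original.min()) + original.min()

def optimize_image(image, related_features, model):
    """Mirror of modules 430/440: preprocess, run the model together
    with the related features, then post-process the model output to
    obtain the optimized image."""
    x = preprocess(image)
    y = model(x, related_features)
    return postprocess(y, image)

# A stand-in "model" that simply returns its input, for demonstration.
identity_model = lambda x, feats: x
img = np.array([[0.0, 100.0], [200.0, 50.0]])
restored = optimize_image(img, {"noise_intensity": 0.1}, identity_model)
```

With the identity stand-in, the affine normalization is exactly undone by the rescaling, so the pipeline returns the original image; a real image processing model would of course change the pixel values between those two steps.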
  • the processing device 120b may include an acquisition module 450 and a training module 460.
  • the acquisition module 450 may be used to acquire data and/or information from one or more components of the image quality optimization system 100.
  • the acquiring module 450 may acquire multiple training samples and their respective corresponding standard images from the storage device described elsewhere in this specification.
  • the obtaining module 450 may obtain multiple qualified images from storage devices described elsewhere in this specification.
  • the acquisition module 450 may preprocess the multiple qualified images to generate multiple training samples.
  • the training module 460 may be configured to obtain an image processing model based on the training process.
  • the training module 460 may train the initial image processing model based on multiple training samples. For example, the training module 460 may iteratively train the initial image processing model until the termination condition is satisfied. In response to the termination condition being satisfied, the image processing model can be finalized.
  • the termination condition may be related to the value of the loss function.
  • the loss function can be positively correlated with the quality weight, and the quality weight can be correlated with the quality characteristics of the sample image. Different training samples can correspond to different quality weights.
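The relationship described above — a loss function positively correlated with a per-sample quality weight, where the weight depends on a quality feature of the sample image — can be sketched as follows. The mapping from noise intensity to quality weight is a hypothetical example; the specification only requires that the loss scale with the weight and that different samples may receive different weights.

```python
import numpy as np

def quality_weight(noise_intensity, max_noise=1.0):
    """Hypothetical mapping from a sample's noise intensity (a quality
    feature) to a quality weight: noisier samples receive a larger
    weight. The actual mapping is implementation-specific."""
    return 1.0 + noise_intensity / max_noise

def weighted_loss(prediction, standard_image, weight):
    """Mean-squared-error loss scaled by the per-sample quality weight,
    so the loss is positively correlated with the weight."""
    mse = np.mean((prediction - standard_image) ** 2)
    return weight * mse

pred = np.array([1.0, 2.0, 3.0])
target = np.array([1.0, 2.0, 2.0])
low = weighted_loss(pred, target, quality_weight(0.1))
high = weighted_loss(pred, target, quality_weight(0.9))
```

Here the noisier (higher-weight) sample contributes a strictly larger loss for the same prediction error, which steers training toward harder samples.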
  • the modules shown in FIG. 4 can be implemented in various ways.
  • the system and its modules may be implemented by hardware, software, or a combination of software and hardware.
  • the hardware part can be implemented using dedicated logic;
  • the software part can be stored in a memory and executed by an appropriate instruction execution system, such as a microprocessor or dedicated design hardware.
  • the above-mentioned methods and systems may be implemented using computer-executable instructions and/or embodied in processor control code, provided, for example, on a carrier medium such as a disk, CD, or DVD-ROM, on a programmable memory such as a read-only memory (firmware), or on a data carrier such as an optical or electronic signal carrier.
  • the system and its modules in this specification may be implemented not only by hardware circuits such as very-large-scale integrated circuits or gate arrays, semiconductors such as logic chips and transistors, or programmable hardware devices such as field-programmable gate arrays and programmable logic devices, but also by software executed by various types of processors, or by a combination of the above hardware circuits and software (for example, firmware).
  • the above description of the processing device 120 is only for convenience of description and does not limit this specification to the scope of the listed embodiments. It can be understood that, for those skilled in the art, after understanding the principle of the system, it is possible to arbitrarily combine various modules, or to form a subsystem to connect with other modules, without departing from this principle. Two or more modules may be combined into one module, and any one module may be divided into two or more units.
  • the processing device 120a and/or the processing device 120b may share two or more modules.
  • the processing device 120a and the processing device 120b may share a common acquisition module, that is, the acquisition module 410 and the acquisition module 450 may be implemented by a single module.
  • the processing device 120a and/or the processing device 120b may include one or more additional modules, such as a storage module (not shown) for storing data. In some embodiments, the processing device 120a and the processing device 120b may be integrated into a single processing device.
  • Fig. 5 is an exemplary flowchart of image quality optimization according to some embodiments of the present specification.
  • the process 500 may be executed by the processing device 120a or other processing devices.
  • the process 500 may be executed as a set of instructions (for example, an application program) stored in a storage device (for example, the storage device 150, the memory 220, and/or the memory 390).
  • the processing device 120a (for example, the processor 210, the CPU 340, and/or one or more modules shown in FIG. 4A) or another processing device may execute the instruction set, and when the instruction set is executed, the processing device 120a or the other processing device may be configured to execute the process 500.
  • the operations of the illustrated process presented below are intended to be illustrative.
  • the process 500 may be accomplished with one or more additional operations not described and/or without one or more of the operations discussed.
  • the operation sequence of the process 500 shown in FIG. 5 and the content described below do not limit the scope of this specification.
  • the processing device 120a may acquire the image to be processed.
  • the image to be processed may refer to an image (for example, 2D image, 3D image, 4D image) and/or original image data (for example, original CT data acquired by the imaging device 110).
  • the original image data includes imaging data.
  • the image to be processed may be an image generated based on imaging data obtained after the imaging device 110 scans the imaging object.
  • the image to be processed may be an image obtained by image reconstruction based on the imaging data.
  • the reconstruction algorithm may include a filtered back projection (FBP) algorithm, an algebraic reconstruction technique (ART), a local reconstruction algorithm (LocalRA), an iterative reconstruction algorithm, etc., or any combination thereof.
  • the imaging device 110 may be an imaging device of a different modality, and correspondingly, the image to be processed may also be an image of a different modality.
  • for example, the imaging device 110 may be a computed tomography (CT) imaging device, and correspondingly, the image to be processed is a computed tomography (CT) image.
  • for another example, the imaging device 110 may be a magnetic resonance imaging (MRI) device, and correspondingly, the image to be processed is a magnetic resonance (MRI) image.
  • the embodiments of this specification do not impose any restrictions on the modality of the image to be processed.
  • the image to be processed may also be an X-ray image, a positron emission tomography (PET) image, a single photon emission computed tomography (SPECT) image, etc., or any combination thereof.
  • the image to be processed may be an image with lower image quality.
  • the image to be processed may include, but is not limited to, an image containing artifacts, an image containing noise, an image with a low signal-to-noise ratio (SNR), a low-contrast image, a low-resolution image, an image containing motion (for example, an image in which pixels have moved), etc.
  • the image to be processed may be generated in advance and stored in a storage device (for example, the storage device 150, the storage 220, and/or the storage 390) described elsewhere in this specification.
  • the processing device 120a (for example, the acquiring module 410) may acquire the image to be processed from the storage device.
  • the image to be processed may be generated by the processing device 120a.
  • the processing device 120a may instruct the imaging device 110 to scan an imaging object or imaging target to obtain imaging data, and the processing device 120a may generate an image to be processed based on the imaging data.
  • the acquiring module 410 may acquire the image to be processed from the imaging device 110.
  • the imaging device 110 generates imaging data based on scanning an imaging object or an imaging target, and determines an image to be processed based on the imaging data.
  • the obtaining module 410 may call related interfaces to obtain the image to be processed from the external device.
  • the processing device 120a may determine at least one relevant feature of the image.
  • the at least one related feature may be a feature directly and/or indirectly related to the image to be processed.
  • the at least one related feature may include an acquisition parameter feature related to an image, an imaging parameter feature related to an image, an imaging object feature, an image quality feature, etc., or any combination thereof.
  • the acquisition parameter feature related to the image may be a feature related to the acquisition process of imaging data corresponding to the image, for example, a feature related to the imaging device 110.
  • the acquisition parameter features may include voltage parameters, current parameters, scan signal filtering parameters, detector size, the detector's response characteristics to signals, the detector's sensitivity characteristics to signal movement, the intensity of noise generated by the detector, etc., or any combination thereof.
  • the voltage parameter may be the value of the voltage applied between the cathode and the anode of the X-ray tube. The voltage parameter can reflect the penetration of X-rays: generally, under the same conditions, the higher the voltage parameter, the stronger the penetration of the X-rays and the worse the contrast of low-density images, and vice versa.
  • the current parameter may be the current value on the cathode of the X-ray tube.
  • the current parameter can reflect the radiation dose received by the imaging object during scanning. The larger the current parameter, the more free electrons are generated by the cathode; and the greater the number of free electrons bombarding the anode target of the X-ray tube, the greater the density of the X-ray photons ultimately generated and the greater the radiation dose received by the imaging object. Since the radiation dose affects the image quality of CT images, the current parameter can also reflect the image quality of the image.
  • the filtering parameter of the scan signal may be a parameter related to the filter between the X-ray tube and the detector.
  • the filter between the X-ray tube and the detector can filter out X-rays that are not helpful for imaging and reduce the radiation dose to the human body; or, according to the shape of the human body, it can appropriately filter out the X-rays at the edge of the human body so that the X-rays attenuated by passing through the human body are more uniform. Therefore, the filtering parameters of the scan signal may reflect the spatial distribution and/or intensity distribution of the X-rays received by the imaging object.
  • the size of the detector may include the size of the detector itself and/or the size of the receiving unit in the detector.
  • the response characteristic of the detector to the signal may include a response gain curve and/or a nonlinear characteristic of the spectral response.
  • the response gain curve may reflect the response ratio of the detector to X-rays of different intensities.
  • the non-linear characteristics of the spectral response may reflect the absorption of X-rays with different energy spectra and the beam hardening effect of the detector.
  • the detector's sensitivity to signal movement may include focal spot jitter response sensitivity.
  • the focus jitter response sensitivity may reflect the change of X-rays detected by the detector due to the focus jitter of the X-ray tube. For example, due to the jitter of the focus, the position of the focus is slightly displaced, and the X-rays detected by the detector will also change slightly. This change will affect the quality of the imaging data and thus the image quality.
  • the intensity of the noise generated by the detector may be the intensity of image noise caused by the static current generated by the circuit in the detector.
  • the intensity of the noise generated by the detector includes the electronic noise of the detector, and the electronic noise of the detector is the electronic noise generated due to the inherent physical limitations of the system.
  • the electronic noise in the photomultiplier tube of the detector is another example.
  • electronic noise in the data acquisition system is another example.
  • the acquisition parameter features may also include the model of the imaging device, the type of the detector, the type of the X-ray tube (for example, the target surface type of the anode target included in the X-ray tube), mechanical parameters of the imaging device (for example, the distance from the radiation source to the center of rotation, the distance from the radiation source to the detector, etc.), etc., or any combination thereof.
  • the imaging parameter feature related to the image may be a parameter related to the image reconstruction process.
  • the imaging parameter characteristics may include the number of pixels, pixel pitch, reconstruction thickness, reconstruction pitch, convolution kernel parameters, etc., or any combination thereof.
  • the number of pixels refers to the number of all pixels included in the image (for example, 512*512, 1024*1024, etc.).
  • Pixel pitch refers to the distance between two adjacent pixels in an image.
  • the reconstruction thickness refers to the thickness of the scan layer of CT, that is, the thickness of a layer of image. In some embodiments, different reconstruction thicknesses can be set according to the scanning location, the location of the lesion, or the needs of the diagnosing doctor. In some embodiments, the reconstruction thickness may be 10mm, 7mm, 5mm, 1mm, etc.
  • the reconstruction distance refers to the distance between the scan layers of CT, that is, the distance between two layers of images.
  • the convolution kernel parameters may include the number of convolution kernels, the size of the convolution kernel, and so on.
  • the characteristics of the imaging object may include the figure of the imaging object (e.g., height, weight, body shape), the age of the imaging object, the gender of the imaging object, the body density of the imaging object, the X-ray attenuation distribution of the imaging object, the density of the imaging target (e.g., organs, tissues) in the imaging object, the X-ray attenuation distribution of the imaging target, the density of each layer in the multi-layer scan of the imaging object, the X-ray attenuation distribution of each layer in the multi-layer scan of the imaging object, the density of each layer in the multi-layer scan of the imaging target, the X-ray attenuation distribution of each layer in the multi-layer scan of the imaging target, etc., or any combination thereof.
  • the characteristics of the imaging object will directly or indirectly affect the attenuation of X-rays, thereby affecting the image quality.
  • the X-ray attenuation corresponding to a fat imaging object is relatively large, and accordingly, the image noise may be relatively large;
  • the X-ray attenuation corresponding to a thin imaging object is relatively small, and correspondingly, the image noise may be relatively small.
  • different imaging targets (for example, different organs) have different effects on the attenuation of X-rays and, accordingly, different effects on image quality.
  • the determining module 420 may obtain the characteristics of the imaging object based on historical data. For example, the determining module 420 may determine the density of the imaging target (for example, an organ or tissue) in the imaging object based on historical images of the imaging object (for example, historically captured CT images, X-ray images, MRI images, PET-CT images, etc.). In some embodiments, the determining module 420 may obtain the characteristics of the imaging object based on data collected in real time. For example, the determining module 420 may determine the figure of the imaging object, the body density of the imaging object, etc., based on images collected in real time by the camera 115 or data collected by other sensors (for example, a weight sensor set on the scanning bed 114). In some embodiments, the determining module 420 may determine the characteristics of the imaging object based on information input by the user (for example, a doctor). For example, the user may input the age of the imaging object, the gender of the imaging object, etc., through the terminal 130.
  • the X-ray attenuation distribution represents the X-ray attenuation value distribution.
  • the determining module 420 may obtain the X-ray attenuation distribution of each layer in the imaging object or the multi-layer scanning layer of the imaging object through a positioning image (for example, a CT positioning image).
  • the determining module 420 may determine the X-ray attenuation distribution of the imaging object, the imaging target, or each layer in the multi-layer scan through the equivalent water phantom diameter. For example, the equivalent water phantom diameter is determined from the positioning image, and the equivalent water phantom diameter is converted into the X-ray attenuation distribution through a standard conversion method.
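As a hedged illustration of such a conversion step, the sketch below computes a water-equivalent diameter from a CT slice using the common definition (each pixel's HU value is converted to a water-equivalent area contribution, summed, and mapped to the diameter of a circle of equal area). The exact conversion method used by the determining module 420 may differ.

```python
import numpy as np

def water_equivalent_diameter(hu_slice, pixel_area_mm2):
    """Estimate the water-equivalent diameter (in mm) of a CT slice:
    convert each pixel's HU value to a water-equivalent fraction
    (HU/1000 + 1), sum into a water-equivalent area, and return the
    diameter of a circle of equal area."""
    water_area = np.sum(hu_slice / 1000.0 + 1.0) * pixel_area_mm2
    return 2.0 * np.sqrt(water_area / np.pi)

# A uniform water region (0 HU everywhere) of 100 pixels at 1 mm^2
# per pixel has a water-equivalent area of exactly 100 mm^2.
phantom = np.zeros((10, 10))
d_w = water_equivalent_diameter(phantom, pixel_area_mm2=1.0)
```

For the uniform water example above, the result is the diameter of a 100 mm² circle, about 11.3 mm; air pixels (-1000 HU) contribute zero area, as expected.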
  • the quality features of the image may include noise features, artifact features, motion features, grayscale features, resolution, contrast, etc., or any combination thereof.
  • the noise characteristics may include noise distribution, noise intensity, noise rate, etc., or any combination thereof.
  • the noise intensity may refer to the value of the noise pixel, which may reflect the noise amplitude in the noise pixel.
  • the global noise intensity can reflect the average noise intensity in the image or the weighted average noise intensity.
  • the noise distribution can reflect the distribution of different noise intensities or different noise amplitudes in the image (for example, the probability density of different noise intensities).
  • the noise distribution may be expressed in a manner such as a noise distribution diagram or a noise distribution function.
  • the determining module 420 may generate the noise distribution of the image based on image reconstruction. For the specific details of generating the noise distribution based on image reconstruction, please refer to FIG. 6A and its related description, which will not be repeated here.
  • the determining module 420 may also determine the noise distribution through a noise extraction algorithm.
  • the noise extraction algorithm may include noise statistical models (e.g., a Gaussian noise model, an impulse noise model, a Rayleigh noise model, an exponentially distributed noise model, a uniformly distributed noise model, or other random noise models) and/or the probability density function (PDF) corresponding to the noise statistical model.
  • the noise rate can reflect the degree of dispersion of the noise distribution.
  • the determination module 420 may determine the noise rate based on the variance and/or standard deviation of the noise distribution.
  • the processing device 120a may use an image block method, a filter method, a spatial sampling method, a Bayesian estimation algorithm, etc., or any combination thereof to determine the noise characteristics of the image.
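The noise features above (global noise intensity, noise distribution, and noise rate as a dispersion measure) can be computed from a noise map with a few lines of NumPy. The bin count and the use of the standard deviation as the noise rate are illustrative choices, not requirements of the specification.

```python
import numpy as np

def noise_features(noise_map):
    """Compute simple noise features from a noise map (pixel-wise
    noise amplitudes): the mean as the global noise intensity, a
    normalized histogram as the empirical noise distribution, and
    the standard deviation as a dispersion ("noise rate") measure."""
    global_intensity = float(np.mean(noise_map))
    hist, bin_edges = np.histogram(noise_map, bins=16, density=True)
    noise_rate = float(np.std(noise_map))
    return {"global_intensity": global_intensity,
            "distribution": (hist, bin_edges),
            "noise_rate": noise_rate}

# Synthetic Gaussian noise with zero mean and standard deviation 2.
rng = np.random.default_rng(0)
noise_map = rng.normal(loc=0.0, scale=2.0, size=(64, 64))
feats = noise_features(noise_map)
```

On the synthetic Gaussian noise map, the recovered global intensity is close to 0 and the noise rate close to 2, matching the generating distribution.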
  • the artifact of the image may refer to a part of the image that does not correspond to any part of the imaging object that actually exists, which may cause image distortion, image overlap, and image loss.
  • the artifact characteristics may include artifact distribution, artifact intensity, global artifact intensity, artifact rate, etc., or any combination thereof.
  • the artifact intensity can refer to the value of an artifact pixel, which reflects the magnitude of the artifact in that pixel.
  • the global artifact strength can reflect the average artifact strength in the image or the weighted average artifact strength.
  • the artifact distribution can reflect the distribution of different artifact intensities in the image (for example, the probability density of different artifact intensities).
  • the artifact distribution can be expressed in a manner such as an artifact distribution map or an artifact distribution function.
  • the artifact rate can reflect the degree of dispersion of the artifact distribution.
  • the determination module 420 may identify artifacts in the image and determine the characteristics of the artifacts.
  • the processing device 120a may use a feature extraction algorithm to determine artifact features.
  • feature extraction algorithms may include the histogram of oriented gradients (HOG), the local binary pattern (LBP) algorithm, the scale-invariant feature transform (SIFT) algorithm, the Haar-like algorithm, the gray-level co-occurrence matrix (GLCM) method, the Hough transform, the Fourier transform, the Fourier shape descriptor method, the shape parameter method, the finite element method (FEM), the turning function, the wavelet descriptor, etc.
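As one concrete example of the feature extraction algorithms listed above, a minimal 3x3 local binary pattern (LBP) can be implemented as follows. This is a generic LBP sketch (fixed radius, fixed neighbor ordering), not the specific variant the determination module 420 would necessarily use.

```python
import numpy as np

def lbp_histogram(image):
    """Minimal 3x3 local binary pattern: encode each interior pixel by
    comparing its 8 neighbors to the center (1 if neighbor >= center),
    pack the comparisons into an 8-bit code, and summarize the codes
    as a normalized 256-bin histogram usable as a texture feature."""
    img = np.asarray(image, dtype=np.float64)
    # Fixed clockwise neighbor order starting at the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    center = img[1:-1, 1:-1]
    codes = np.zeros_like(center, dtype=np.int64)
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = img[1 + dy : img.shape[0] - 1 + dy,
                       1 + dx : img.shape[1] - 1 + dx]
        codes |= (neighbor >= center).astype(np.int64) << bit
    hist = np.bincount(codes.ravel(), minlength=256).astype(np.float64)
    return hist / hist.sum()

flat = np.full((8, 8), 5.0)   # uniform image: every neighbor >= center
hist = lbp_histogram(flat)
```

On a perfectly uniform image every comparison is true, so every interior pixel receives code 255 and the histogram places all mass in that bin — a quick sanity check of the encoding.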
  • the motion feature may reflect the motion of pixels in the image.
  • the movement characteristics may include movement intensity, movement direction, movement position, etc. or any combination thereof.
  • the determining module 420 may obtain the motion vector distribution through a motion vector extraction algorithm.
  • the motion vector extraction algorithm may include a motion vector field adaptive search technique (MVFAST), an image block matching motion estimation algorithm, a pixel method, a feature method, etc., or any combination thereof.
  • the grayscale feature may include grayscale distribution and/or global grayscale distribution.
  • the grayscale distribution may reflect the distribution of grayscale values of pixels in the image.
  • the global grayscale distribution may reflect the overall distribution of the grayscale values of the pixels in the image (for example, the average grayscale value of the pixels and/or the weighted average grayscale value of the pixels).
  • the determination module 420 may use a histogram of directional gradient (HOG) and/or a local binary pattern (LBP) algorithm to determine the grayscale feature.
  • the resolution may be the number of pixels contained in an image per inch. Generally, the higher the resolution of the image, the clearer the image.
  • the contrast may refer to the measurement of different brightness levels between the brightest white and the darkest black in the bright and dark regions of an image, that is, the magnitude of the grayscale contrast of an image.
  • the greater the contrast, the clearer the image and the more vivid the colors.
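The grayscale and contrast features described above can be illustrated with a short sketch that computes a gray-level histogram (grayscale distribution), the mean gray value (global grayscale), and two common contrast measures. The particular measures (Michelson and RMS contrast) are examples, not definitions required by the specification.

```python
import numpy as np

def grayscale_and_contrast_features(image):
    """Grayscale/contrast features: a normalized gray-level histogram,
    the mean gray value, Michelson contrast ((max-min)/(max+min)),
    and RMS contrast (the standard deviation of gray values)."""
    img = np.asarray(image, dtype=np.float64)
    hist, _ = np.histogram(img, bins=32, density=True)
    mean_gray = float(img.mean())
    gmax, gmin = float(img.max()), float(img.min())
    michelson = (gmax - gmin) / (gmax + gmin) if (gmax + gmin) > 0 else 0.0
    rms = float(img.std())
    return {"histogram": hist, "mean_gray": mean_gray,
            "michelson_contrast": michelson, "rms_contrast": rms}

# A tiny checkerboard of gray values 50 and 150.
checker = np.array([[50.0, 150.0], [150.0, 50.0]])
feats = grayscale_and_contrast_features(checker)
```

For the checkerboard, the mean gray value is 100, the Michelson contrast is 0.5, and the RMS contrast is 50, matching the hand-computed values.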
  • the determining module 420 may determine the above-mentioned quality characteristics based on a machine learning model. Specifically, the determining module 420 may input at least one of the acquisition parameter feature, the imaging parameter feature, and the imaging object feature into the feature processing model; and based on the output of the feature processing model, determine the quality feature of the image. For the specific details of determining the quality feature based on the machine learning model, please refer to FIG. 6B and its related description, which will not be repeated here.
  • the determining module 420 may determine different quality features according to the image quality optimization target (or the application scenario of image quality optimization). For example, if the image quality optimization goal is denoising, the quality feature is the noise feature; if the goal is de-artifacting, the quality feature is the artifact feature; if the goal is motion correction, the quality feature is the motion feature; if the goal is to improve resolution, the quality feature is resolution; if the goal is to improve contrast, the quality feature is contrast; and if the goal is scatter correction, the quality feature is the artifact feature, because the essence of scatter correction is to correct the deviation between the actually collected data, which is affected by X-ray scattering, and the estimated scatter-free data.
  • scattering refers to the extra rays produced by the refraction of X-rays on the surface of the detector or its internal structural parts; due to scattering, artifacts appear in the reconstructed image. It is understandable that the above examples list only the minimum quality features required to achieve each image quality optimization goal, and multiple quality features may also be selected according to actual needs to enhance the representation of image quality.
  • for example, if the image quality optimization target is contrast, the quality features may include both contrast and resolution.
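The goal-to-feature correspondence described above can be sketched as a simple lookup table. The goal names and the option to append extra features are illustrative, not part of the specification.

```python
# Hypothetical mapping from an optimization goal to the minimum quality
# feature(s) the determination module would extract; additional
# features can be appended to enrich the representation of quality.
GOAL_TO_QUALITY_FEATURES = {
    "denoise": ["noise"],
    "de-artifact": ["artifact"],
    "motion_correction": ["motion"],
    "improve_resolution": ["resolution"],
    "improve_contrast": ["contrast"],
    "scatter_correction": ["artifact"],  # scatter manifests as artifacts
}

def select_quality_features(goal, extra=()):
    """Return the minimum quality features for a goal plus any extras."""
    return GOAL_TO_QUALITY_FEATURES[goal] + list(extra)

feats = select_quality_features("improve_contrast", extra=["resolution"])
```

This mirrors the example in the text: a contrast-optimization goal minimally requires the contrast feature, and resolution can be added to strengthen the quality representation.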
  • the processing device 120a may input the image and at least one related feature into the image processing model.
  • the image processing model may be a pre-trained machine learning model for optimizing the image based on the image and/or at least one related feature, so as to achieve the optimization goal corresponding to the image.
  • the image processing model may be a deep learning model. Deep learning models can include deep neural network (DNN) models, multi-layer perceptron (MLP) models, convolutional neural network (CNN) models, generative adversarial network (GAN) models, and/or deep convolutional encoder-decoder (DCED) neural network models.
  • the processing device 120a may select the image processing model based on at least one of the acquisition parameter features, the imaging parameter features, and the imaging object features. For example, the processing device 120 can train an image processing model corresponding to the acquisition parameter features by inputting sample images and sample acquisition parameter features; correspondingly, when performing optimization based on an image and its acquisition parameter features, the processing device 120 can obtain the image processing model corresponding to the acquisition parameter features. For another example, the processing device 120 can train an image processing model corresponding to the acquisition parameter features and the imaging object features by inputting sample images, sample acquisition parameter features, and sample imaging object features.
  • correspondingly, the processing device 120 may obtain the image processing model corresponding to the acquisition parameter features and the imaging object features. It is understandable that the processing device 120 may also make other combinations based on the related features and obtain the corresponding image processing model.
  • the acquisition parameter features and/or the imaging parameter features can reflect the type of imaging device that acquired the image (that is, the modality of the imaging device, for example, a CT device, an MRI device, or a PET device); the imaging object features can reflect the type of imaging object or the type of imaging target in the image.
  • the imaging object and/or imaging target may be simply referred to as "object”
  • the imaging object type and/or imaging target type may be simply referred to as "object type”.
  • the processing device 120a may select an image processing model corresponding to the object type based on the object type.
  • the image processing model corresponding to the object type (for example, the image processing model corresponding to "chest") may be trained based on multiple training samples (for example, sample chest images) related to the object type.
  • the processing device 120 may select an image processing model corresponding to the imaging device type based on the imaging device type.
  • the image processing model corresponding to the imaging device type may be trained based on multiple training samples (for example, sample CT images) related to the imaging device type.
  • the imaging parameter characteristics may also reflect the image reconstruction algorithm.
  • the processing device 120 may select an image processing model corresponding to the reconstruction algorithm based on the reconstruction algorithm.
  • the image processing model corresponding to the reconstruction algorithm (for example, the image processing model corresponding to the iterative reconstruction algorithm) may be trained based on multiple training samples related to the reconstruction algorithm (for example, the sample iterative reconstruction image).
  • different reconstruction algorithms may introduce different types of noise and/or artifacts.
  • the processing device 120 may select an image processing model based on the type of noise, the type of artifacts, or the type of motion.
  • the image processing model corresponding to the noise type (for example, an image processing model corresponding to Gaussian noise) may be trained based on a plurality of training samples (for example, sample Gaussian noise images) related to the noise type.
  • the image processing model corresponding to the artifact type (for example, the image processing model corresponding to the streak artifact) may be trained based on a plurality of training samples related to the artifact type (for example, sample streak artifact images).
  • similarly, the processing device 120 can train an image processing model corresponding to the motion type.
  • the processing device 120a may obtain the image processing model from a storage device (for example, the storage device 150) of the image quality optimization system 100 or from an external source through a network (for example, the network 140).
  • the image processing model may be trained in advance by a training device (for example, the processing device 120) and stored in the storage device of the image quality optimization system 100.
  • the processing device 120 can access the storage device and obtain an image processing model.
  • the image processing model may be generated by a training device (for example, the processing device 120) according to a training process (for example, processes 700, 1000, 1100, etc.).
  • the input module 430 may process at least one related feature, and input the processed feature into the image processing model.
  • the input module 430 may convert at least one relevant feature into a vector or matrix form, perform vector or matrix fusion (for example, addition, multiplication, splicing, etc.), and input the fused vector or matrix into the image processing model.
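The vector fusion described above (addition, multiplication, splicing) might be sketched as follows; the function name and the example feature values are illustrative only, and numpy arrays are assumed as the vector representation:

```python
import numpy as np

def fuse_features(features, mode="concat"):
    """Fuse a list of 1-D feature vectors before feeding them to the model.

    mode="concat"   -> splice the vectors end to end
    mode="add"      -> element-wise addition (vectors must share a length)
    mode="multiply" -> element-wise multiplication
    """
    if mode == "concat":
        return np.concatenate(features)
    stacked = np.stack(features)          # shape: (n_features, length)
    if mode == "add":
        return stacked.sum(axis=0)
    if mode == "multiply":
        return stacked.prod(axis=0)
    raise ValueError(f"unknown fusion mode: {mode}")

acq = np.array([120.0, 300.0])   # hypothetical acquisition features, e.g. kV, mA
obj = np.array([1.0, 0.0])       # hypothetical one-hot encoded object type
fused = fuse_features([acq, obj], mode="concat")   # length-4 input vector
```

The fused vector (or matrix) would then be input into the image processing model alongside the image.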
  • the processing device 120 may preprocess the image, and the preprocessing includes image resampling or image normalization. For example only, the processing device 120 may resample the image to determine a resampled image having a preset size. Then, the processing device 120 may normalize the resampled image so that the pixel (or voxel) values of the normalized image fall within a preset range (for example, [-1, 1]). Further, the processing device 120 may input the normalized image and at least one related feature into the image processing model.
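The resampling and normalization steps above might be sketched as follows; nearest-neighbour resampling and linear min-max scaling are assumptions for illustration, as the text does not fix particular algorithms:

```python
import numpy as np

def resample_nearest(img, out_shape):
    """Nearest-neighbour resample of a 2-D image to a preset size."""
    rows = (np.arange(out_shape[0]) * img.shape[0] / out_shape[0]).astype(int)
    cols = (np.arange(out_shape[1]) * img.shape[1] / out_shape[1]).astype(int)
    return img[np.ix_(rows, cols)]

def normalize(img, lo=-1.0, hi=1.0):
    """Linearly map pixel values into the preset range [lo, hi]."""
    mn, mx = img.min(), img.max()
    if mx == mn:                       # constant image: map to the midpoint
        return np.full_like(img, (lo + hi) / 2.0, dtype=float)
    return lo + (img - mn) * (hi - lo) / (mx - mn)

img = np.arange(16.0).reshape(4, 4)
prepped = normalize(resample_nearest(img, (2, 2)))   # values in [-1, 1]
```

The normalized image would then be input into the image processing model together with the related features.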
  • step 540 the processing device 120a (for example, the optimization module 440) determines an optimized image of the image based on the output of the image processing model.
  • the determining module 420 may determine different quality features according to the image quality optimization target. Accordingly, the optimization module 440 can determine different optimized images. For example, if the image quality optimization goal is to denoise or increase resolution, the optimized image is an image with a lower noise level than the image to be processed (for example, an image after noise reduction/removal) or an image with a higher resolution than the image to be processed. For another example, if the image quality optimization goal is to remove artifacts, the optimized image is an image with an artifact level lower than that of the image to be processed (for example, an image after artifact reduction/removal).
  • the optimization module 440 may determine the output of the image processing model as an optimized image. In some embodiments, after determining the optimized image, the optimization module 440 may further process the optimized image, for example, smoothing, adjusting resolution or contrast, adjusting brightness, and so on. In some embodiments, the optimization module 440 may also transmit the optimized image or the processed optimized image to the terminal 130 for display. In some embodiments, the optimization module 440 may also transmit the optimized image or the processed optimized image to the storage device 150 for storage.
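The further processing mentioned above (for example, smoothing) might be sketched with a simple 3x3 mean filter; this particular filter is an illustrative assumption, not a method fixed by the text:

```python
import numpy as np

def smooth3x3(img):
    """Post-process an optimized image with a 3x3 mean filter
    (edges handled by replicate padding)."""
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            out += padded[1 + dr:1 + dr + img.shape[0],
                          1 + dc:1 + dc + img.shape[1]]
    return out / 9.0

# An isolated bright pixel is spread over its 3x3 neighbourhood.
impulse = np.zeros((3, 3))
impulse[1, 1] = 9.0
smoothed = smooth3x3(impulse)
```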
  • by inputting the image together with its related features, the image processing model can better understand the relevant information of the image, so as to better achieve the image optimization goals.
  • FIG. 6A is an exemplary flow chart of generating noise characteristics of an image based on image reconstruction according to some embodiments of the present specification.
  • the process 600a may be executed by the processing device 120a or other processing devices.
  • the process 600a may be executed as a set of instructions (for example, an application program) stored in a storage device (for example, the storage device 150, the memory 220, and/or the memory 390).
  • the processing device 120a (for example, the processor 210, the CPU 340, and/or one or more modules shown in FIG. 4A) may execute the instruction group, and when executing the instruction, the processing device 120a may be configured to execute the process 600a.
  • the operation of the illustrated flow presented below is for illustration.
  • the process 600a may be completed using one or more additional operations not described and/or one or more operations not discussed.
  • the operation sequence of the process 600a shown in FIG. 6A and the content described below do not limit the scope of this specification.
  • Step 610 The processing device 120a (determination module 420) obtains the initial noise characteristics of the collected data corresponding to the image.
  • the acquired data corresponding to the image may refer to the scan data (or may be referred to as "raw data” or "imaging data") acquired during the scanning phase.
  • the corresponding scan data may refer to the X-ray data detected by the detector 112.
  • the determining module 420 may obtain the initial noise characteristics in a variety of ways. In some embodiments, the determination module 420 may determine the initial noise characteristic (for example, the initial noise distribution) based on the acquisition signal of the acquired data. Specifically, the determining module 420 may acquire at least one acquisition signal intensity corresponding to the acquired data; determine at least one noise amplitude corresponding to the at least one acquisition signal intensity; and determine the initial noise characteristic of the acquired data based on the at least one noise amplitude.
  • the collected signal strength may be the strength of the electrical signal collected by the detector 112.
  • the detector 112 can collect X-rays passing through the imaging object, and convert the received photons into electrical signals. Accordingly, the intensity of the electrical signal can represent the number of photons collected by the detector 112.
  • the determining module 420 may determine at least one noise amplitude corresponding to the intensity of at least one collected signal based on a statistical law.
  • the statistical law may be a statistical law formed by the interaction between X-rays and the detector 112.
  • the statistical law may include the relationship between the number of photons received by the detector 112 and noise.
  • the number of photons received by the detector 112 may be proportional to the expectation of noise (e.g., the noise amplitude or the variance of the noise amplitude).
  • the intensity of the collected signal can represent the number of photons collected by the detector 112. Therefore, the determining module 420 can determine at least one noise amplitude corresponding to the intensity of the at least one collected signal based on the statistical law.
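The statistical law above is commonly modeled with Poisson counting statistics, under which the variance of the detected photon count equals its expectation; the sketch below assumes that model and assumes the collected signal intensity is proportional to the photon count (both are illustrative assumptions):

```python
import numpy as np

def noise_amplitude(signal_intensity):
    """Estimate the noise amplitude for a detected signal intensity.

    Assuming the intensity is proportional to the photon count N and the
    count is Poisson distributed, the standard deviation is sqrt(N), so
    the *relative* noise falls as the signal grows.
    """
    counts = np.asarray(signal_intensity, dtype=float)
    return np.sqrt(counts)

intensities = np.array([100.0, 10000.0])
amps = noise_amplitude(intensities)
relative = amps / intensities   # stronger signal -> lower relative noise
```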
  • step 620 the processing device 120a (for example, the determination module 420) performs image reconstruction based on the initial noise characteristics and the collected data, and generates the noise characteristics of the image.
  • the determining module 420 may perform image reconstruction through a reconstruction algorithm.
  • the reconstruction algorithm may include Fourier slice theorem algorithm, filtered back projection (FBP) algorithm, fan beam reconstruction algorithm, iterative reconstruction algorithm, analytical reconstruction algorithm, compressed sensing (CS)-based algorithm, etc., or any combination thereof.
  • in some embodiments, image reconstruction requires reconstruction parameters (i.e., imaging parameter features).
  • accordingly, the determining module 420 may perform image reconstruction based on the initial noise features (e.g., the initial noise distribution), the acquired data, and the imaging parameter features to generate the noise features (e.g., the noise distribution) of the image.
  • when the determining module 420 determines any one of the noise features, it can correspondingly determine other noise features. For example, when the determining module 420 determines the noise distribution, it can obtain the noise intensity or the noise rate through a preset conversion relationship.
  • FIG. 6B is an exemplary flowchart of determining image quality features based on a feature processing model according to some embodiments of the present specification.
  • the process 600b may be executed by the processing device 120a or other processing devices.
  • the process 600b may be executed as a set of instructions (for example, an application program) stored in a storage device (for example, the storage device 150, the memory 220, and/or the memory 390).
  • the processing device 120a (for example, the processor 210, the CPU 340, and/or one or more modules shown in FIG. 4A) may execute the instruction group, and when the instruction is executed, the processing device 120a may be configured to execute the process 600b.
  • the operation of the illustrated flow presented below is for illustration.
  • the process 600b may be completed using one or more additional operations not described and/or one or more operations not discussed.
  • the operation sequence of the process 600b shown in FIG. 6B and the content described below do not limit the scope of this specification.
  • step 630 the processing device 120a (for example, the determination module 420) inputs at least one of the acquisition parameter feature, the imaging parameter feature, and the imaging object feature into the feature processing model.
  • the feature processing model may be a pre-trained machine learning model.
  • the feature processing model may include a Convolutional Neural Networks (CNN) model, support vector machine, naive Bayes, decision tree, and so on.
  • the initial feature processing model can be trained based on multiple sample images and multiple reference quality features corresponding to the sample images (the reference quality features serve as labels of the sample images) to obtain the feature processing model.
  • the sample image may be a medical image of any modality of the imaging object, such as a CT image, an X-ray image, an MR image, a PET-CT image, and the like.
  • the sample image may also be a posture image of the imaging object.
  • for example, the front image and side image of the patient taken by a camera may be used as the posture image.
  • in some embodiments, the reference quality feature may be expressed as a quality feature map (for example, a reference noise distribution map, a reference artifact distribution map, a reference motion vector distribution map), a quality feature matrix, or in other forms.
  • multiple sample images and the multiple reference quality features corresponding to the sample images can be input to the initial feature processing model, and the initial feature processing model can process the sample images and determine their corresponding sample quality features. Further, based on the difference between the sample quality features and the reference quality features, the model parameters can be updated iteratively until the training is completed. For example, the value of the loss function can be determined based on the difference between the sample quality feature and the reference quality feature, and the model parameters can be further adjusted to make the loss function value as small as possible until the training is completed.
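The iterative update described above can be illustrated by the loss it minimizes; the mean-squared loss below is an assumption (the text does not fix a loss function), and the example quality feature maps are synthetic:

```python
import numpy as np

def quality_feature_loss(pred_map, ref_map):
    """Mean-squared difference between a predicted quality feature map
    (e.g. a noise distribution map) and its reference label."""
    return float(np.mean((pred_map - ref_map) ** 2))

ref = np.full((4, 4), 0.5)       # reference noise map (the label)
bad = np.zeros((4, 4))           # early-training prediction
good = np.full((4, 4), 0.45)     # later prediction, closer to the label

# Training iteratively adjusts the model parameters so that this
# loss value keeps shrinking, as in the text above.
early_loss = quality_feature_loss(bad, ref)
late_loss = quality_feature_loss(good, ref)
```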
  • step 640 the processing device 120a (for example, the determining module 420) determines the quality feature of the image based on the output of the feature processing model.
  • the quality feature of the image to be processed output by the model can be represented by a quality feature map (for example, a noise distribution map, an artifact distribution map, a motion vector distribution map), a quality feature matrix, etc.
  • Fig. 7 is an exemplary flowchart of the training process of the image processing model shown in some embodiments of the present specification.
  • the process 700 may be executed online or offline by the processing device 120b or other processing devices.
  • the process 700 may be implemented as a set of instructions (for example, an application program) stored in a storage device (for example, the storage device 150, the memory 220, and/or the memory 390).
  • the processing device 120b (for example, the processor 210, the CPU 340, and/or one or more modules shown in FIG. 4B) may execute the instruction group, and when the instruction is executed, the processing device 120b may be configured to execute the process 700.
  • the operation of the illustrated flow presented below is for illustration.
  • the image processing model described in operation 530 of FIG. 5 may be an image processing model obtained according to the process 700.
  • step 710 the processing device 120b (for example, the acquiring module 450) acquires multiple training samples and multiple standard images corresponding to the multiple training samples.
  • each of the plurality of training samples includes a sample image and at least one sample-related feature of the sample image.
  • the sample image may be related to the training target of the image processing model, that is, related to the application scenario of the image processing model. For example, if the application scenario of the image processing model is denoising, the sample image is an image after adding noise. For another example, if the application scene of the image processing model is to remove artifacts, the sample image is an image after adding artifacts. For another example, if the application scenario of the image processing model is to increase the resolution, the sample image is an image after the resolution is reduced.
  • multiple training samples may correspond to multiple quality levels (e.g., multiple noise intensities, multiple artifact intensities).
  • multiple training samples may be generated in advance and stored in a storage device (for example, the storage device 150 or an external database) disclosed elsewhere in this specification.
  • the processing device 120 may directly obtain multiple training samples from the storage device.
  • in some embodiments, multiple sample images and/or multiple standard images may be determined based on historical data.
  • for example, a historical image corresponding to the imaging object (for example, a historical CT image) may be used as a sample image.
  • the sample image can then be processed (for example, by denoising, removing artifacts, or improving resolution, with the processing method related to the training target of the image processing model).
  • the image obtained after such processing is used as the standard image.
  • At least a part of the plurality of training samples may be generated by the processing device 120.
  • the processing device 120 may preprocess at least one qualified image to obtain multiple sample images and multiple standard images.
  • a qualified image can refer to an image whose quality characteristics meet the quality requirements.
  • for details, please refer to FIG. 8 and related descriptions, which will not be repeated here.
  • the at least one sample-related feature may include: a sample acquisition parameter feature related to the sample image, a sample imaging parameter feature related to the sample image, a sample imaging object feature, a sample quality feature of the sample image, etc., or any combination thereof.
  • the sample quality features of the sample image include sample noise features, sample artifact features, sample motion features, sample grayscale features, sample resolution, sample contrast, etc., or any combination thereof.
  • the at least one sample-related feature please refer to the at least one related feature of the image in step 520, which will not be repeated here.
  • each of the plurality of training samples further includes a weight of each of at least one sample-related feature.
  • step 720 the processing device 120b (for example, the training module 460) trains a preliminary image processing model based on a plurality of training samples and a plurality of standard images, and determines an image processing model.
  • in some embodiments, multiple training samples and multiple standard images can be processed, and the processed training samples and standard images are input to the preliminary image processing model.
  • for example, multiple sample images, the sample-related features of the sample images, and multiple standard images can be converted into a first matrix, a second matrix, and a third matrix, respectively; the first matrix, the second matrix, and/or the third matrix can then be fused.
  • the fused matrix is input into the preliminary image processing model.
  • fusion may include, but is not limited to: matrix addition, matrix multiplication, matrix splicing, and the like.
  • the fusion can also be performed after input to the preliminary image processing model. For example, fusion is performed in the input layer of the preliminary image processing model.
  • in some embodiments, some sample-related features (for example, sample acquisition parameter features, sample imaging parameter features, sample imaging object features) may be expressed as discrete values.
  • these discrete values can be converted into a fourth matrix, which is used as the input of the intermediate layer of the preliminary image processing model.
  • the preliminary image processing model may include a deep learning model such as a DNN model, an MLP model, a CNN model, a GAN model, and a DCED network model.
  • the preliminary image processing model may include at least one preliminary model parameter.
  • the at least one preliminary model parameter may be a default setting of the image quality optimization system 100, or may be adjusted based on different situations. Taking the CNN model as an example, at least one preliminary model parameter may include the number of convolution layers, the number of convolution kernels, the size of the convolution kernel, the step size, the parameters of each convolution layer, etc., or any combination thereof.
  • the preliminary image processing model can process multiple training samples and determine their corresponding sample optimized images. Further, the model parameters can be updated iteratively based on the multiple sample optimized images and the multiple standard images until the training is completed. For example, the value of the loss function can be determined based on the difference between the sample optimized images and the standard images, and the model parameters can be further adjusted to make the loss function value as small as possible until the training is completed.
  • the loss function of the image processing model may be positively correlated with the quality weight; the quality weight is determined based on the sample quality features. In some embodiments, the quality weight may be related to the quality level of the sample image. For example, the loss function in the current iteration can be determined according to the following equation (1):

    L = Σ_(i=1)^n w_i · l(f(x_i), y_i)    (1)

  • where L represents the loss function; n represents the number of training samples; x_i represents the sample image in the i-th training sample (which may be called the "i-th sample image"); f(x_i) represents the predicted optimized image corresponding to the i-th sample image; y_i represents the i-th qualified image corresponding to the i-th sample image (serving as the standard image); l(f(x_i), y_i) represents the loss related to the difference between the i-th predicted optimized image and the i-th qualified image (for example, a square loss or an absolute value loss); and w_i represents the i-th quality weight corresponding to the i-th training sample.
  • the loss function can be further normalized according to the following equation (2):

    L = ( Σ_(i=1)^n w_i · l(f(x_i), y_i) ) / ( Σ_(i=1)^n w_i )    (2)
  • the quality weight can be determined according to one or more sample quality characteristics of the sample image in the training sample. For example, for each of one or more sample quality features, the processing device 120b may determine a quality sub-weight corresponding to the sample quality feature. Further, the processing device 120b may determine the quality weight based on the quality sub-weight corresponding to the sample quality feature. For example, the processing device 120 may determine the integrated result of one or more quality sub-weights (for example, weighted sum, product, average) as the quality weight.
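The weighted loss of equation (1) and its normalized variant of equation (2) might be sketched as follows, assuming a square per-sample loss (the text also allows an absolute-value loss); all names are illustrative:

```python
import numpy as np

def per_sample_loss(pred, std):
    """Square loss l(f(x_i), y_i) between one predicted optimized image
    and its standard image."""
    return float(np.mean((pred - std) ** 2))

def weighted_loss(preds, stds, weights, normalize=False):
    """L = sum_i w_i * l(f(x_i), y_i); with normalize=True the sum is
    divided by sum_i w_i, the normalized variant of equation (2)."""
    losses = [per_sample_loss(p, y) for p, y in zip(preds, stds)]
    total = float(np.dot(weights, losses))
    return total / float(np.sum(weights)) if normalize else total

preds = [np.zeros((2, 2)), np.ones((2, 2))]   # predicted optimized images
stds = [np.ones((2, 2)), np.ones((2, 2))]     # standard (qualified) images
weights = [0.5, 1.0]                          # e.g. noisier sample down-weighted
L1 = weighted_loss(preds, stds, weights)
L2 = weighted_loss(preds, stds, weights, normalize=True)
```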
  • quality weights and/or quality sub-weights may be negatively correlated or positively correlated with sample quality characteristics, so as to balance the influence of different samples on training the model.
  • multiple training samples may include sample images with higher sample noise intensity and sample images with lower sample noise intensity.
  • optimizing samples with higher sample noise intensity can easily reduce the loss function. Therefore, in order to balance the impact of samples with higher sample noise intensity and samples with lower sample noise intensity on the loss function, the quality weight corresponding to samples with higher sample noise intensity may be smaller than the quality weight corresponding to samples with lower sample noise intensity.
  • other noise features (e.g., the noise rate) are handled similarly to the noise intensity and will not be repeated here.
  • the quality weight may be negatively correlated with the noise characteristics of the sample.
  • the multiple training samples may include sample images with higher sample contrast (for example, sample CT images including objects with higher structural differences, such as bones or lungs) and sample images with lower sample contrast (for example, sample CT images including objects with lower structural differences, such as soft tissue, e.g., the liver).
  • the quality weight corresponding to samples with higher sample contrast may be smaller than the quality weight corresponding to samples with lower sample contrast.
  • the quality weight may be negatively related to the sample contrast.
  • the quality weight corresponding to samples with higher sample artifact strength may be smaller than the quality weight corresponding to samples with lower sample artifact strength.
  • the quality weight may be inversely related to the artifact characteristics of the sample.
  • the quality weight corresponding to samples with higher sample motion intensity may be smaller than the quality weight corresponding to samples with lower sample motion intensity.
  • other motion features (for example, motion position) are handled similarly to the motion intensity and will not be repeated here.
  • the quality weight is negatively related to the sample motion characteristics.
  • multiple training samples may include sample images with a higher sample resolution and sample images with a lower sample resolution.
  • optimizing samples with lower sample resolution can easily reduce the loss function. Therefore, in order to balance the influence of samples with higher sample resolution and samples with lower sample resolution on the loss function, the quality weight corresponding to samples with higher sample resolution can be greater than the quality weight corresponding to samples with lower sample resolution.
  • the quality weight may be positively related to the sample resolution.
  • in some embodiments, the quality weight can be expressed as the following equation (3):

    w_i = a_i^x · b_i^y    (3)

  • where w_i represents the i-th quality weight corresponding to the i-th training sample; a_i represents the first sample quality feature of the i-th training sample; b_i represents the second sample quality feature of the i-th training sample; and the exponents x and y may be greater than or less than zero.
  • when the value of x (or y) is greater than zero, the quality weight can be positively correlated with the first sample quality feature (or the second sample quality feature); when the value of x (or y) is less than zero, the quality weight can be negatively correlated with the first sample quality feature (or the second sample quality feature).
  • the values of x and y can be the default settings of the image quality optimization system 100, or they can be set by the operator of the image quality optimization system 100 to balance the influence of the first sample quality feature and the second sample quality feature on training the model.
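Since equation (3) is described here only through its symbols, the sketch below assumes the quality weight combines the two sample quality features as a product of powers, consistent with the surrounding description of the exponents x and y; the feature values are illustrative:

```python
def quality_weight(a_i, b_i, x=1.0, y=-1.0):
    """Sketch of equation (3): w_i = a_i**x * b_i**y.

    A positive exponent makes the weight grow with that sample quality
    feature; a negative exponent makes it shrink (e.g. down-weighting
    samples with higher noise intensity).
    """
    return (a_i ** x) * (b_i ** y)

# e.g. a_i = sample resolution (x > 0: higher resolution -> larger weight),
#      b_i = sample noise intensity (y < 0: noisier -> smaller weight)
w = quality_weight(2.0, 4.0)
```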
  • the processing device 120b may determine image processing models corresponding to different imaging object types and/or different scanning device types. More descriptions can be found elsewhere in this disclosure (for example, Figures 10-11 and their descriptions). In some embodiments, the processing device 120b may determine image processing models corresponding to different related features, that is, corresponding to acquisition parameter features, imaging parameter features, and/or imaging object features. In some embodiments, the processing device 120b may determine image processing models corresponding to different noise types, different artifact types, and/or different motion types.
  • the sample image and at least one sample related feature of the sample image are jointly used as the training sample to train the preliminary image processing model.
  • At least one sample-related feature involves multiple dimensions (for example, acquisition parameter feature, imaging parameter feature, imaging object feature, quality feature), which can prompt the preliminary image processing model to better learn the information of each dimension of the image, and then better complete the optimization Target.
  • Fig. 8 is an exemplary flow chart of obtaining training samples according to some embodiments of the present specification.
  • the process 800 may be executed online or offline by a processing device (for example, the processing device 120b or other processing devices).
  • the process 800 may be executed as a set of instructions (for example, an application program) stored in a storage device (for example, the storage device 150, the memory 220, and/or the memory 390).
  • the processing device 120b (for example, the processor 210, the CPU 340, and/or one or more modules shown in FIG. 4B) may execute the instruction group, and when the instruction is executed, the processing device 120b may be configured to execute the process 800.
  • the operation of the illustrated flow presented below is for illustration.
  • the image processing model described in operation 530 of FIG. 5 may be an image processing model obtained according to the process 800.
  • step 810 the processing device 120b (for example, the acquiring module 450) acquires at least one qualified image.
  • the qualified image may be an image whose image quality meets a preset requirement.
  • the qualified image may be related to the training target of the image processing model, that is, related to the application scenario of the image processing model. For example, if the application scenario of the image processing model is denoising, the qualified image is an image whose noise meets a preset requirement (for example, the noise level is lower than a preset threshold). For another example, if the application scenario of the image processing model is de-artifacting, the qualified image is an image whose artifacts meet a preset requirement (for example, the artifact level is lower than a preset threshold).
  • for another example, if the application scenario of the image processing model is to improve resolution, the qualified image is an image whose resolution meets a preset requirement (for example, the resolution is higher than a preset threshold).
  • for another example, if the application scenario of the image processing model is motion correction, the qualified image is an image whose motion meets a preset requirement (for example, the motion level is lower than a preset threshold).
  • the preset requirements may be system default values, or may be adjusted according to actual requirements.
  • step 820 the processing device 120b (for example, the acquisition module 450) preprocesses at least one qualified image to obtain multiple training samples.
  • preprocessing may include segmentation, adding noise, adding artifacts, reducing resolution, adding motion, etc., or any combination thereof.
  • preprocessing is also related to the training target of the image processing model, that is, related to the application scenario of the image processing model.
  • noise can be added through a noise adding algorithm.
  • the noise adding algorithm includes, but is not limited to, a linear congruence algorithm, a Mersenne rotation algorithm, a linear congruence algorithm with carry, and the like.
  • a qualified image may be preprocessed to obtain multiple sample images. For example, different levels of noise can be added to a qualified image to obtain multiple sample images.
  • a qualified image may be divided into multiple sub-images, and the multiple sub-images may be preprocessed respectively to obtain multiple sample images. For example, different levels of noise can be added to each sub-image to obtain multiple sample images.
  • FIGS. 9A and 9B are taken as examples for illustration.
  • the processes 900a and 900b shown in FIG. 9A and FIG. 9B take the example of adding noise to qualified images to obtain multiple sample images. It can be understood that the processes illustrated in FIGS. 9A and 9B are also applicable to adding artifacts, reducing resolution, and adding motion to qualified images to obtain multiple sample images, which is not limited in the embodiment of this specification.
  • different levels of noise can be added to the qualified image 910, such as first-level noise 920-1, second-level noise 920-2, ..., m-th level noise 920-m; the sample images corresponding to the different levels of added noise are sample image 930-1, sample image 930-2, ..., sample image 930-m.
  • the qualified image 910 can be segmented to obtain multiple sub-images, namely sub-image 940-1, ..., sub-image 940-n; then, different levels of noise are added to each of the multiple sub-images to obtain the corresponding sample images. For example, adding first-level noise 920-1, second-level noise 920-2, ..., m-th level noise 920-m to sub-image 940-1 yields sample image 930-1-1, sample image 930-1-2, ..., sample image 930-1-m.
  • step 830 the processing device 120b (for example, the acquisition module 450) determines, for each of the multiple training samples, the corresponding qualified image as the standard image of that training sample.
  • as shown in FIG. 9A, the qualified image 910 is used as the standard image of sample image 930-1, sample image 930-2, ..., and sample image 930-m; as shown in FIG. 9B, sub-image 940-1 (which may be referred to as a "qualified sub-image") is used as the standard image of sample image 930-1-1, sample image 930-1-2, ..., sample image 930-1-m.
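The sample-generation scheme of FIGS. 9A and 9B can be sketched as follows. This is a minimal illustration assuming NumPy and additive Gaussian noise as the noise model; the helper names (`make_training_samples`, `split_into_subimages`) are hypothetical and not taken from this specification.

```python
import numpy as np

def make_training_samples(qualified, noise_levels, seed=0):
    """For one qualified image, create one noisy sample per noise level;
    the qualified image itself serves as the standard (label) image."""
    rng = np.random.default_rng(seed)
    samples = []
    for sigma in noise_levels:
        noisy = qualified + rng.normal(0.0, sigma, qualified.shape)
        samples.append({"input": noisy, "label": qualified, "sigma": sigma})
    return samples

def split_into_subimages(qualified, grid=2):
    """Segment a qualified image into grid x grid qualified sub-images
    (as in FIG. 9B, each sub-image would then be noised separately)."""
    h, w = qualified.shape
    hs, ws = h // grid, w // grid
    return [qualified[i*hs:(i+1)*hs, j*ws:(j+1)*ws]
            for i in range(grid) for j in range(grid)]
```

Applying `make_training_samples` to each sub-image from `split_into_subimages` reproduces the 930-1-1 ... 930-1-m numbering pattern described above.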
  • FIG. 10 is an exemplary flowchart for acquiring an image processing model corresponding to the type of imaging device according to some embodiments of the present specification.
  • the process 1000 may be executed online or offline by the processing device 120b or other processing devices.
  • the process 1000 may be executed as a set of instructions (for example, an application program) stored in a storage device (for example, the storage device 150).
  • the processing device 120b may execute the instruction group, and when the instruction is executed, the processing device 120b may be configured to execute the process 1000.
  • the operations of the process illustrated below are intended to be illustrative.
  • the process 500 may be executed based on one or more operations of the process 1000.
  • the processing device 120b (for example, the acquisition module 450) acquires a plurality of qualified images related to the type of imaging device.
  • the qualified image may refer to an image whose image quality meets a preset requirement. Therefore, the qualified image related to the type of imaging device may refer to the qualified image produced by the type of imaging device.
  • the plurality of qualified images related to the type of imaging device may include a plurality of qualified CT images.
  • the plurality of qualified images related to the type of imaging device may include a plurality of qualified PET images.
  • multiple qualified images may be pre-generated and stored in a storage device (eg, storage device 150) disclosed elsewhere in the present invention.
  • the processing device 120b may directly obtain multiple qualified images from the storage device.
  • the processing device 120b may generate multiple qualified images based on imaging data generated by this type of imaging device.
  • step 1020 the processing device 120b (for example, the acquisition module 450) preprocesses multiple qualified images to generate multiple sample images.
  • preprocessing may include segmentation, adding noise, adding artifacts, adding motion, etc., or any combination thereof.
  • the processing device 120b may use an image segmentation algorithm to segment the qualified image into a plurality of qualified sub-images.
  • image segmentation algorithms can include, but are not limited to, threshold-based segmentation algorithms, edge-based segmentation algorithms, region-based segmentation algorithms, clustering-based segmentation algorithms, wavelet-transform-based image segmentation algorithms, mathematical-morphology-based image segmentation algorithms, image segmentation algorithms based on artificial neural networks, etc., or any combination thereof.
  • the processing device 120b may add different interference information (for example, different levels of noise, artifacts, and/or motion) to the qualified image or qualified sub-image to generate multiple sample images corresponding to the qualified image or qualified sub-image.
  • the processing device 120b may add noise of different levels and/or different types (e.g., Gaussian noise, impulse noise, Rayleigh noise, exponentially distributed noise, uniformly distributed noise, random noise) to the qualified image or qualified sub-image.
  • the processing device 120b can add different levels and/or different types of artifacts (e.g., streak artifacts, ring artifacts, shadow artifacts, band artifacts, windmill artifacts, motion artifacts) to the qualified image or qualified sub-image. Similarly, the processing device 120b may also add different levels and/or different types of motion to the qualified image or qualified sub-image, which will not be repeated here.
  • the processing device 120b may adjust the resolution, contrast, gray scale, etc. of the qualified image or qualified sub-image to generate multiple sample images corresponding to the qualified image or qualified sub-image. For example, the resolution of the qualified image or qualified sub-image may be reduced to different levels.
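A minimal sketch of two of the degradations described above: reducing resolution to different levels and adding a toy streak-like artifact. The block-averaging and periodic-band approaches here are illustrative assumptions, not the specific degradation algorithms of this specification.

```python
import numpy as np

def reduce_resolution(image, factor):
    """Lower effective resolution by block-averaging, then re-expand so the
    output keeps the original pixel grid (larger factor = lower resolution)."""
    h, w = image.shape
    small = image.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    return np.kron(small, np.ones((factor, factor)))

def add_streak_artifact(image, strength, period=4):
    """Toy streak artifact: periodic horizontal bands of added intensity."""
    rows = np.arange(image.shape[0])[:, None]
    return image + strength * (rows % period == 0)
```

Varying `factor` or `strength` produces the "different levels" of degradation used to build the sample set.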
  • the processing device 120b trains a preliminary image processing model based on the multiple sample images and at least one sample-related feature corresponding to the multiple sample images to obtain an image processing model corresponding to the type of imaging device.
  • the processing device 120b may obtain the preliminary image processing model described elsewhere in this specification.
  • the processing device 120b may acquire at least one sample quality feature corresponding to a plurality of sample images.
  • the processing device 120b may train a preliminary image processing model based on the multiple sample images and the at least one sample-related feature corresponding to each sample image, and generate an image processing model corresponding to the type of imaging device, wherein the label of each sample image is the corresponding qualified image or qualified sub-image.
  • the training of the image processing model corresponding to the type of imaging device may be the same as or similar to the training described in step 720, and will not be repeated here.
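One plausible way to feed a sample image together with its sample-related features into a convolutional model is to broadcast each scalar feature (e.g., tube voltage, noise rate) into a constant-valued channel stacked with the image. This conditioning mechanism is an assumption for illustration; the specification does not state exactly how its model consumes the features.

```python
import numpy as np

def build_model_input(image, features):
    """Stack the image with one constant channel per scalar relevant feature,
    a common way to condition a convolutional model on scalar metadata."""
    h, w = image.shape
    channels = [image[None, :, :]]
    for value in features:
        channels.append(np.full((1, h, w), float(value)))
    return np.concatenate(channels, axis=0)  # shape: (1 + len(features), h, w)
```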
  • Fig. 11 is an exemplary flowchart for obtaining an image processing model corresponding to an object type according to some embodiments of the present specification.
  • the process 1100 may be executed online or offline by the processing device 120b or other processing devices.
  • the process 1100 may be implemented as a set of instructions (for example, an application program) stored in a storage device (for example, the storage device 150).
  • the processing device 120b may execute the instruction group, and when the instruction is executed, the processing device 120b may be configured to execute the process 1100.
  • the operations of the process illustrated below are intended to be illustrative.
  • the process 500 may be executed based on one or more operations of the process 1100.
  • step 1110 the processing device 120b (for example, the acquisition module 450) acquires a plurality of qualified images related to the object type.
  • the qualified image may refer to an image whose image quality meets a preset requirement. Therefore, the qualified image related to the object type may refer to the qualified image including the object type.
  • the plurality of qualified images related to the object type may include a plurality of qualified breast images.
  • the plurality of qualified images related to the object type may include a plurality of qualified head images.
  • multiple qualified images may be pre-generated and stored in a storage device (eg, storage device 150) disclosed elsewhere in the present invention.
  • the processing device 120b may directly obtain multiple qualified images from the storage device.
  • the processing device 120b may generate a plurality of qualified images based on the imaging data related to the object type.
  • step 1120 the processing device 120b (for example, the acquisition module 450) preprocesses multiple qualified images to generate multiple sample images.
  • preprocessing may include segmentation, adding noise, adding artifacts, etc., or any combination thereof.
  • for the specific details of step 1120, please refer to step 1020 and its related description, which will not be repeated here.
  • the processing device 120b trains an initial image processing model based on a plurality of sample images and a plurality of sample quality characteristics corresponding to the plurality of sample images, and obtains an image processing model corresponding to the object type.
  • the processing device 120b may obtain the initial image processing model described elsewhere in this specification.
  • the processing device 120b may obtain at least one sample-related feature corresponding to a plurality of sample images.
  • the processing device 120b may train a preliminary image processing model based on a plurality of sample images and at least one sample related feature to generate an image processing model corresponding to the object type, wherein the label of the sample image is the corresponding qualified image or qualified sub-image.
  • the training of the image processing model corresponding to the object type may be the same as or similar to the training described in step 720, and will not be repeated here.
  • Fig. 12 is another exemplary flowchart of image quality optimization according to some embodiments of the present specification.
  • the process 1200 may be executed by the processing device 120a or other processing devices.
  • the process 1200 may be executed as a set of instructions (for example, an application program) stored in a storage device (for example, the storage device 150, the memory 220, and/or the memory 390).
  • the processing device 120a (for example, the processor 210, the CPU 340, and/or one or more modules shown in FIG. 4A) or other processing device may execute the instruction set, and when the instruction set is executed, the processing device 120a or other processing device may It is configured to execute the process 1200.
  • the operations of the process illustrated below are intended to be illustrative.
  • the process 1200 may be accomplished with one or more additional operations not described and/or without one or more of the operations discussed.
  • the operation sequence of the process 1200 shown in FIG. 12 and the content described below do not limit the scope of this specification.
  • step 1210 the processing device 120a (for example, the acquiring module 410) acquires the collected data to be processed.
  • for the specific details of the collected data, please refer to step 610 and its related description, which will not be repeated here.
  • for other details of step 1210, please refer to step 510 and its related description, which will not be repeated here.
  • step 1220 the processing device 120a (for example, the determining module 420) determines at least one relevant feature of the collected data.
  • the at least one related feature of the collected data may include at least one of a feature of a collection parameter related to the collected data, a feature of an imaging object, and a quality feature of the collected data.
  • step 1230 the processing device 120a (for example, the input module 430) inputs the collected data and at least one related feature into the image processing model.
  • for the specific details of step 1230, please refer to step 530 and its related description, which will not be repeated here.
  • the processing device 120a determines the optimized image data of the collected data based on the output of the image processing model.
  • for the specific details of step 1240, please refer to step 540 and its related description, which will not be repeated here.
  • FIG. 13A and FIG. 13B are the comparison results of image optimization using the optimization method of the embodiment of this specification and other methods according to some embodiments of this specification.
  • FIG. 13A shows an optimized image obtained by optimizing a liver image with an image processing model trained without the at least one relevant feature, and FIG. 13B shows an optimized image obtained by optimizing the liver image with an image processing model trained with the at least one relevant feature.
  • the optimized image in FIG. 13A has residual noise in the liver and a poor noise reduction effect, which may affect the subsequent analysis of the image and thereby affect normal diagnosis; in contrast, the optimized image in FIG. 13B better restores the structure of the liver with no residual noise, and accordingly can provide accurate information for subsequent diagnosis. That is to say, according to the method described in the embodiments of this specification, the image processing model has a better learning ability for the noise distribution, a better processing ability for noise, and a better noise reduction effect.
  • the embodiment of this specification also provides a device for optimizing image quality.
  • the device includes a processor and a memory; the memory is used to store instructions, and the processor is used to execute the instructions to implement the operations corresponding to the image quality optimization method described anywhere in this specification.
  • the embodiments of this specification also provide a computer-readable storage medium.
  • the computer-readable storage medium stores computer instructions, and when the computer instructions are executed by a processor, the operations corresponding to the image quality optimization method described anywhere in this specification are implemented.
  • the possible beneficial effects brought by the embodiments of this specification include, but are not limited to: by introducing relevant features of multiple dimensions of the image (for example, acquisition parameter features, imaging parameter features, imaging object features, quality features) into the model training process, the image processing model can better learn information in multiple dimensions of the image, so as to better achieve the purpose of image optimization. It should be noted that different embodiments may have different beneficial effects; in different embodiments, the possible beneficial effects may be any one or a combination of the above, or any other beneficial effects that may be obtained.
  • the computer storage medium may contain a propagated data signal containing a computer program code, for example on a baseband or as part of a carrier wave.
  • the propagated signal may have multiple manifestations, including electromagnetic forms, optical forms, etc., or a suitable combination.
  • the computer storage medium may be any computer-readable medium other than a computer-readable storage medium, and the medium may be connected to an instruction execution system, apparatus, or device to realize communication, propagation, or transmission of the program for use.
  • the program code located on the computer storage medium can be transmitted through any suitable medium, including radio, cable, fiber optic cable, RF, or similar medium, or any combination of the above medium.
  • the computer program code required for the operation of each part of this specification can be written in any one or more programming languages, including object-oriented programming languages such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, and Python; conventional procedural programming languages such as C, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, and ABAP; dynamic programming languages such as Python, Ruby, and Groovy; or other programming languages.
  • the program code can run entirely on the user's computer, or as an independent software package on the user's computer, or partly on the user's computer and partly on a remote computer, or entirely on the remote computer or processing equipment.
  • the remote computer can be connected to the user's computer through any form of network, such as a local area network (LAN) or a wide area network (WAN), or connected to an external computer (for example, via the Internet), or in a cloud computing environment, or used as a service, such as software as a service (SaaS).
  • numbers describing quantities of components and attributes are used; it should be understood that such numbers used in the description of the embodiments are modified in some examples by the terms "about", "approximately", or "substantially". Unless otherwise stated, "about", "approximately", or "substantially" indicates that the number is allowed to vary by ±20%.
  • the numerical parameters used in the specification and claims are approximate values, and the approximate values can be changed according to the required characteristics of individual embodiments. In some embodiments, the numerical parameter should consider the prescribed effective digits and adopt the method of general digit retention. Although the numerical ranges and parameters used to confirm the breadth of the ranges in some embodiments of this specification are approximate values, in specific embodiments, the setting of such numerical values is as accurate as possible within the feasible range.

Abstract

The embodiments of this specification disclose a method and system for image quality optimization. The method includes: acquiring an image to be processed; determining at least one relevant feature of the image, the at least one relevant feature including at least one of an acquisition parameter feature related to the image, an imaging parameter feature related to the image, an imaging object feature, and a quality feature of the image; inputting the image and the at least one relevant feature into an image processing model; and determining an optimized image of the image based on an output of the image processing model.

Description

A Method and System for Image Quality Optimization
CROSS-REFERENCE
This application claims priority to Chinese Patent Application No. 202110044133.9, filed on January 13, 2021, which claims priority to International Application No. PCT/CN2020/090862, filed on May 18, 2020, the entire contents of each of which are incorporated herein by reference.
TECHNICAL FIELD
The embodiments of this specification relate to the technical field of image processing, and in particular, to a method and system for image quality optimization.
BACKGROUND
With the development of medical imaging technology, the optimization of image quality, such as image denoising and/or artifact removal, has become increasingly important in medical image processing. At present, machine learning techniques can be applied to image quality optimization; however, traditional machine learning techniques cannot achieve a good optimization effect. Therefore, it is necessary to provide a method and system for image quality optimization to improve the performance of machine learning models in denoising, artifact removal, motion correction, and/or resolution improvement, thereby improving the effect of medical image processing.
SUMMARY
One aspect of the embodiments of this specification provides a method for image quality optimization, including: acquiring an image to be processed; determining at least one relevant feature of the image, the at least one relevant feature including at least one of an acquisition parameter feature related to the image, an imaging parameter feature related to the image, an imaging object feature, and a quality feature of the image; inputting the image and the at least one relevant feature into an image processing model; and determining an optimized image of the image based on an output of the image processing model.
One aspect of the embodiments of this specification provides a method for image quality optimization, including: acquiring collected data to be processed; determining at least one relevant feature of the collected data, the at least one relevant feature of the collected data including at least one of an acquisition parameter feature related to the collected data, an imaging object feature, and a quality feature of the collected data; inputting the collected data and the at least one relevant feature into an image processing model; and determining optimized image data of the collected data based on an output of the image processing model.
One aspect of the embodiments of this specification provides a system for image quality optimization, including: an acquisition module configured to acquire an image to be processed; a determination module configured to determine at least one relevant feature of the image, the at least one relevant feature including at least one of an acquisition parameter feature related to the image, an imaging parameter feature related to the image, an imaging object feature, and a quality feature of the image; an input module configured to input the image and the at least one relevant feature into an image processing model; and an optimization module configured to determine an optimized image of the image based on an output of the image processing model.
One aspect of the embodiments of this specification provides a system for image quality optimization, including: an acquisition module configured to acquire collected data to be processed; a determination module configured to determine at least one relevant feature of the collected data, the at least one relevant feature of the collected data including at least one of an acquisition parameter feature related to the collected data, an imaging object feature, and a quality feature of the collected data; an input module configured to input the collected data and the at least one relevant feature into an image processing model; and an optimization module configured to determine optimized image data of the collected data based on an output of the image processing model.
One aspect of the embodiments of this specification provides a device for image quality optimization, the device including a processor and a memory, the memory being configured to store instructions, and the processor being configured to execute the instructions to implement operations corresponding to the method for image quality optimization described in any of the preceding items.
One aspect of the embodiments of this specification provides a computer-readable storage medium storing computer instructions which, when executed by a processor, implement operations corresponding to the method for image quality optimization described in any of the preceding items.
BRIEF DESCRIPTION OF THE DRAWINGS
This specification will be further described by way of exemplary embodiments, which will be described in detail with reference to the accompanying drawings. These embodiments are not limiting; in these embodiments, the same reference numerals denote the same structures, wherein:
FIG. 1 is a schematic diagram of an exemplary image quality optimization system according to some embodiments of this specification;
FIG. 2 is a schematic diagram of hardware and/or software components of an exemplary computing device according to some embodiments of this specification;
FIG. 3 is a schematic diagram of hardware and/or software components of an exemplary mobile device according to some embodiments of this specification;
FIGS. 4A and 4B are block diagrams of exemplary processing devices according to some embodiments of this specification;
FIG. 5 is an exemplary flowchart of image quality optimization according to some embodiments of this specification;
FIG. 6A is an exemplary flowchart of generating a noise feature of an image based on image reconstruction according to some embodiments of this specification;
FIG. 6B is an exemplary flowchart of determining an image quality feature based on a feature processing model according to some embodiments of this specification;
FIG. 7 is an exemplary flowchart of a training process of an image processing model according to some embodiments of this specification;
FIG. 8 is an exemplary flowchart of acquiring training samples according to some embodiments of this specification;
FIGS. 9A and 9B are exemplary flowcharts of adding noise to a qualified image to obtain sample images according to some embodiments of this specification;
FIG. 10 is an exemplary flowchart for acquiring an image processing model corresponding to a type of imaging device according to some embodiments of this specification;
FIG. 11 is an exemplary flowchart for acquiring an image processing model corresponding to an object type according to some embodiments of this specification;
FIG. 12 is another exemplary flowchart of image quality optimization according to some embodiments of this specification;
FIGS. 13A and 13B are comparison results of image optimization using the optimization method of the embodiments of this specification and other methods according to some embodiments of this specification.
DETAILED DESCRIPTION
In order to more clearly illustrate the technical solutions of the embodiments of this specification, the drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some examples or embodiments of this specification, and those of ordinary skill in the art can apply this specification to other similar scenarios based on these drawings without creative effort. Unless obvious from the context or otherwise stated, the same reference numerals in the drawings represent the same structures or operations.
It should be understood that the terms "system", "device", "unit", and/or "module" used in this specification are a way to distinguish different components, elements, parts, portions, or assemblies of different levels. However, these terms may be replaced by other expressions if the other expressions can achieve the same purpose.
As shown in this specification and the claims, unless the context clearly indicates an exception, the words "a", "an", "one", and/or "the" do not specifically refer to the singular and may also include the plural. Generally speaking, the terms "include" and "comprise" only indicate the inclusion of clearly identified steps and elements, and these steps and elements do not constitute an exclusive list; a method or device may also include other steps or elements.
Flowcharts are used in this specification to illustrate the operations performed by the system according to the embodiments of this specification. It should be understood that the preceding or following operations are not necessarily performed precisely in order. Instead, the steps may be processed in reverse order or simultaneously. Also, other operations may be added to these processes, or one or more steps may be removed from these processes.
It will be understood that when a unit, engine, module, or block is referred to as being "on", "connected to", or "coupled to" another unit, engine, module, or block, it may be directly on, connected to, or in communication with the other unit, engine, module, or block, or intervening units, engines, modules, or blocks may be present, unless the context clearly indicates otherwise. In this specification, the term "and/or" may include any one or more of the associated listed items or combinations thereof. In this specification, the term "image" is used to collectively refer to image data (e.g., scan data) and/or images of various forms, including two-dimensional (2D) images, three-dimensional (3D) images, four-dimensional (4D) images, etc.
These and other features of this specification, as well as the operating methods and functions of related structural elements and the combination of parts and economies of manufacture, will become more apparent upon consideration of the following description with reference to the accompanying drawings, all of which form a part of this specification. However, it should be clearly understood that the drawings are for illustration and description only and are not intended to limit the scope of this specification. It should be understood that the drawings are not drawn to scale.
Provided herein are systems and components for non-invasive biomedical imaging and/or treatment, for example, for disease diagnosis, treatment, or research purposes. In some embodiments, the system may include an imaging system. The imaging system may include a single-modality system and/or a multi-modality system. The term "modality" as used herein broadly refers to an imaging or treatment method or technique that collects, generates, processes, and/or analyzes imaging information of an object or treats the object. The single-modality system may include, for example, an ultrasound imaging system, an X-ray imaging system, a computed tomography (CT) system, a magnetic resonance imaging (MRI) system, an ultrasonography system, a positron emission tomography (PET) system, an optical coherence tomography (OCT) imaging system, an ultrasound (US) imaging system, an intravascular ultrasound (IVUS) imaging system, a near-infrared spectroscopy (NIRS) imaging system, etc., or any combination thereof. The multi-modality system may include, for example, an X-ray imaging-magnetic resonance imaging (X-ray-MRI) system, a positron emission tomography-X-ray imaging (PET-X-ray) system, a single-photon emission computed tomography-magnetic resonance imaging (SPECT-MRI) system, a positron emission tomography-computed tomography (PET-CT) system, a C-arm system, a positron emission tomography-magnetic resonance imaging (PET-MR) system, a digital subtraction angiography-magnetic resonance imaging (DSA-MRI) system, etc. In some embodiments, the medical system may include a treatment system. The treatment system may include a treatment planning system (TPS), image-guided radiotherapy (IGRT), etc. Image-guided radiotherapy (IGRT) may include a treatment device and an imaging device. The treatment device may include a linear accelerator, a cyclotron, a synchrotron, etc., configured to perform radiotherapy on a subject. The treatment device may include accelerators of multiple particle species, for example, photons, electrons, protons, or heavy ions. The imaging device may include an MRI scanner, a CT scanner (e.g., a cone beam computed tomography (CBCT) scanner), a digital radiology (DR) scanner, an electronic portal imaging device (EPID), etc. The medical system described below is provided for illustrative purposes only and does not limit the scope of this specification.
In this specification, a subject may include a biological object and/or a non-biological object. The biological object may be a human, an animal, a plant, or a specific part, organ, and/or tissue thereof. For example, the subject may include the head, neck, chest, heart, stomach, blood vessels, soft tissue, tumors, nodules, etc., or any combination thereof. In some embodiments, the subject may be an artificial composition of organic and/or inorganic matter, animate or inanimate. In this specification, when the subject refers to a human, an animal, a plant, or a specific part thereof, the terms "imaging object" and "subject" may be used interchangeably; when the subject refers to an organ and/or tissue, the terms "imaging target" and "subject" may be used interchangeably.
In this specification, a representation of an object (e.g., a patient, a subject, or a portion thereof) in an image may be simply referred to as the object. For example, a representation of an organ and/or tissue (e.g., the heart, the liver, a lung) in an image may be simply referred to as the organ or tissue. Further, an image including a representation of an object may be simply referred to as an image of the object or an image including the object. Still further, an operation performed on a representation of an object in an image may be simply referred to as an operation on the object. For example, segmentation of a portion of an image including a representation of an organ and/or tissue may be simply referred to as segmentation of the organ and/or tissue.
FIG. 1 is a schematic diagram of an exemplary image quality optimization system according to some embodiments of this specification.
As shown in FIG. 1, the image quality optimization system 100 may include an imaging device 110, a processing device 120, a terminal 130, a network 140, and a storage device 150. The components of the image quality optimization system 100 may be connected in various ways. For example, the imaging device 110 and the processing device 120 may be connected through the network 140 or directly (as indicated by the dashed arrow connecting the imaging device 110 and the processing device 120 in FIG. 1). As another example, the storage device 150 and the processing device 120 may be connected directly or through the network 140. As another example, the terminal 130 and the processing device 120 may be connected through the network 140 or directly (as indicated by the dashed arrow connecting the terminal 130 and the processing device 120 in FIG. 1).
The imaging device 110 may scan an imaging object located within a scanning area and generate imaging data related to the imaging object (also referred to as "raw data" or "scan data"). The imaging object may include a biological object (e.g., a human body, an animal), a non-biological object (e.g., a phantom), etc. In some embodiments, the imaging device 110 may be a computed tomography (CT) imaging device, a positron emission tomography (PET) imaging device, a magnetic resonance (MRI) imaging device, a single-photon emission computed tomography (SPECT) imaging device, an emission computed tomography (ECT) imaging device, an ultrasound (UI) imaging device, a digital radiography (DR) imaging device, etc., or any combination thereof (e.g., a PET-CT imaging device, a PET-MRI imaging device).
In some embodiments, the imaging device 110 may include a gantry 111, a detector 112, an imaging area 113, and a scanning table 114. The gantry 111 may support the detector 112. The imaging object may be placed on the scanning table 114 and moved into the imaging area 113 to be scanned. In some embodiments, the detector 112 may include one or more detector units. The detector units may include single-row detectors and/or multi-row detectors. The detector units may include scintillation detectors (e.g., cesium iodide detectors) or other detectors. In some embodiments, the gantry 111 may rotate clockwise or counterclockwise about a gantry rotation axis. In some embodiments, the imaging device 110 may further include an X-ray tube (not shown), which may rotate together with the gantry 111. The X-ray tube may emit a radiation beam (e.g., X-rays) toward the imaging object; the beam is attenuated by the imaging object and detected by the detector 112, thereby generating imaging data.
The processing device 120 may process data and/or information obtained from the imaging device 110, the terminal 130, and/or the storage device 150. For example, the processing device 120 may generate an image based on the imaging data acquired by the imaging device 110. As another example, the processing device 120 may perform quality optimization on the generated image. Specifically, the processing device 120 may determine at least one relevant feature of an image to be processed. The at least one relevant feature may include one or more of an acquisition parameter feature related to the image, an imaging parameter feature related to the image, an imaging object feature, a quality feature of the image, etc. The processing device 120 may input the image and the at least one relevant feature into an image processing model. The processing device 120 may determine an optimized image of the image based on an output of the image processing model. In some embodiments, the processing device 120 may be a single server or a server group. The server group may be centralized or distributed. In some embodiments, the processing device 120 may be local or remote.
The terminal 130 may include a mobile device 131, a tablet computer 132, a laptop computer 133, etc., or any combination thereof. In some embodiments, the terminal 130 may interact with other components of the image quality optimization system 100 through the network 140. For example, the terminal 130 may send one or more control instructions to the imaging device 110 to control the imaging device 110 to scan the imaging object according to the instructions. In some embodiments, the mobile device 131 may include, but is not limited to, a smartphone, a handheld game console, smart glasses, a smart watch, a wearable device, a virtual display device, a display enhancement device, etc., or any combination thereof. In some embodiments, the terminal 130 may be part of the processing device 120. In some embodiments, the terminal 130 may be integrated with the processing device 120 as an operation console of the imaging device 110. For example, a user/operator (e.g., a doctor) of the image quality optimization system 100 may control the operation of the imaging device 110 through the console, for example, to scan the imaging object.
The network 140 may include any suitable network capable of facilitating the exchange of information and/or data of the image quality optimization system 100. For example, the network 140 may include a wired network, a fiber-optic network, a telecommunications network, a local area network, a wireless local area network (WLAN), a metropolitan area network (MAN), a public switched telephone network (PSTN), a Bluetooth™ network, a ZigBee™ network, a near-field communication (NFC) network, etc., or any combination thereof. In some embodiments, the network 140 may include one or more network access points. For example, the network 140 may include wired and/or wireless network access points, such as base stations and/or Internet exchange points, through which one or more components of the image quality optimization system 100 may connect to the network 140 to exchange data and/or information.
The storage device 150 may store data (e.g., scan data of the imaging object), instructions, and/or any other information. In some embodiments, the storage device 150 may store data obtained from the imaging device 110, the terminal 130, and/or the processing device 120; for example, the storage device 150 may store a treatment plan, scan data of the imaging object, etc., obtained from the imaging device 110. In some embodiments, the storage device 150 may store data and/or instructions that the processing device 120 may execute or use to perform the exemplary methods described in this specification. In some embodiments, the storage device 150 may include mass storage, removable storage, volatile read-write memory, read-only memory (ROM), etc., or any combination thereof. Mass storage may include magnetic disks, optical disks, solid-state drives, mobile storage, etc. Removable storage may include flash drives, floppy disks, optical disks, memory cards, ZIP disks, magnetic tapes, etc. Volatile read-write memory may include random access memory (RAM). RAM may include dynamic random access memory (DRAM), double data rate synchronous dynamic random access memory (DDR-SDRAM), static random access memory (SRAM), thyristor random access memory (T-RAM), zero-capacitance random access memory (Z-RAM), etc. ROM may include mask read-only memory (MROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), compact disc read-only memory (CD-ROM), digital versatile disc, etc. In some embodiments, the storage device 150 may be implemented on a cloud platform. For example, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an inter-cloud, a multi-cloud, etc., or any combination thereof.
In some embodiments, the storage device 150 may be connected to the network 140 to communicate with one or more components of the image quality optimization system 100 (e.g., the processing device 120, the terminal 130). One or more components of the image quality optimization system 100 may read data or instructions in the storage device 150 through the network 140. In some embodiments, the storage device 150 may be part of the processing device 120, or may be independent and directly or indirectly connected to the processing device 120.
It should be noted that the above description of the image quality optimization system 100 is provided for illustrative purposes only and does not limit the scope of this specification. For those of ordinary skill in the art, various changes and modifications may be made based on the description of this specification. For example, the image quality optimization system 100 may include one or more additional components, and/or one or more components of the image quality optimization system 100 described above may be omitted. Additionally or alternatively, two or more components of the image quality optimization system 100 may be integrated into a single component. A component of the image quality optimization system 100 may be implemented on two or more sub-components.
FIG. 2 is a schematic diagram of exemplary hardware and/or software components of an exemplary computing device 200 according to some embodiments of this specification. The computing device 200 may be used to implement any component of the image quality optimization system 100 as described herein. For example, the processing device 120 and/or the terminal 130 may each be implemented on the computing device 200 via its hardware, software programs, firmware, or a combination thereof. Although only one such computing device is shown for convenience, the computer functions related to the image quality optimization system 100 described herein may be implemented in a distributed manner on multiple similar platforms to distribute the processing load. As shown in FIG. 2, the computing device 200 may include a processor 210, a memory 220, an input/output (I/O) 230, and a communication port 240.
The processor 210 may execute computer instructions (e.g., program code) and perform the functions of the processing device 120 according to the techniques described herein. The computer instructions may include, for example, routines, programs, components, parts, data structures, procedures, modules, and functions that perform the specific functions described herein. For example, the processor 210 may execute instructions obtained from the terminal 130 and/or the storage device 150. In some embodiments, the processor 210 may include one or more hardware processors, such as a microcontroller, a microprocessor, a reduced instruction set computer (RISC), an application-specific integrated circuit (ASIC), an application-specific instruction-set processor (ASIP), a central processing unit (CPU), a graphics processing unit (GPU), a physics processing unit (PPU), a microcontroller unit, a digital signal processor (DSP), a field-programmable gate array (FPGA), an advanced RISC machine (ARM), a programmable logic device (PLD), any circuit or processor capable of performing one or more functions, etc., or any combination thereof.
For illustrative purposes only, only one processor is described in the computing device 200. However, it should be noted that the computing device 200 in this specification may also include multiple processors, and thus the steps and/or method steps described in this specification as being performed by one processor may also be performed jointly or separately by multiple processors. For example, if in this specification the processor of the computing device 200 performs both operation A and operation B, it should be understood that operation A and operation B may also be performed jointly or separately by two or more different processors in the computing device 200 (e.g., a first processor performs operation A and a second processor performs operation B, or the first and second processors jointly perform operations A and B).
The memory 220 may store data/information obtained from the imaging device 110, the processing device 120, the storage device 150, the terminal 130, and/or any other component of the image quality optimization system 100. In some embodiments, the memory 220 may include mass storage, removable storage, volatile read-write memory, read-only memory (ROM), etc., or any combination thereof. In some embodiments, the memory 220 may store one or more programs and/or instructions to perform the exemplary methods described in this specification.
The input/output 230 may input and/or output signals, data, information, etc. In some embodiments, the input/output 230 may enable user interaction with the processing device 120. In some embodiments, the input/output 230 may include an input device and an output device. Exemplary input devices may include a keyboard, a mouse, a touch screen, a microphone, etc., or any combination thereof. Exemplary output devices may include a display device, a speaker, a printer, a projector, etc., or any combination thereof. Examples of display devices may include a liquid crystal display (LCD), a light-emitting diode (LED)-based display, a flat panel display, a curved screen, a television device, a cathode ray tube (CRT), a touch screen, etc., or any combination thereof.
The communication port 240 may be connected to a network (e.g., the network 140) to facilitate data communication. The communication port 240 may establish a connection between the processing device 120 and the imaging device 110, the terminal 130, and/or the storage device 150. The connection may be a wired connection, a wireless connection, any other communication connection capable of enabling data transmission and/or reception, and/or any combination of these connections. The wired connection may include, for example, an electrical cable, an optical cable, a telephone line, etc., or any combination thereof. The wireless connection may include, for example, a Bluetooth link, a Wi-Fi link, a WiMax link, a WLAN link, a ZigBee link, a mobile network link (e.g., 3G, 4G, 5G), etc., or a combination thereof. In some embodiments, the communication port 240 may be and/or include a standardized communication port, such as RS232, RS485, etc. In some embodiments, the communication port 240 may be a specially designed port. For example, the communication port 240 may be designed according to the Digital Imaging and Communications in Medicine (DICOM) protocol.
FIG. 3 is a schematic diagram of hardware and/or software components of an exemplary mobile device according to some embodiments of this specification. In some embodiments, one or more components of the image quality optimization system 100 (e.g., the terminal 130 and/or the processing device 120) may be implemented on the mobile device 300.
As shown in FIG. 3, the mobile device 300 may include a communication platform 310, a display 320, a graphics processing unit (GPU) 330, a central processing unit (CPU) 340, an input/output 350, a memory 360, and a storage 390. In some embodiments, any other suitable component, including but not limited to a system bus or a controller (not shown), may also be included in the mobile device 300. In some embodiments, a mobile operating system 370 (e.g., iOS™, Android™, Windows Phone™) and one or more applications 380 may be loaded from the storage 390 into the memory 360 for execution by the CPU 340. The applications 380 may include a browser or any other suitable mobile application for receiving and rendering information related to image processing or other information from the processing device 120. User interaction with the information stream may be achieved through the input/output 350 and provided to the processing device 120 and/or other components of the image quality optimization system 100 through the network 140.
To implement the various modules, units, and functions described in this specification, a computer hardware platform may be used as the hardware platform for one or more of the components described herein. The hardware elements, operating systems, and programming languages of such computers are conventional in nature, and it is assumed that those skilled in the art are sufficiently familiar with them to adapt those techniques to generate images as described herein. A computer with user interface components may be used to implement a personal computer (PC) or any other type of workstation or terminal. If properly programmed, a computer may also act as a server. It is believed that those skilled in the art are familiar with the structure, programming, and general operation of such computer equipment, and therefore the drawings should be self-explanatory.
FIGS. 4A and 4B are block diagrams of exemplary processing devices according to some embodiments of this specification.
In some embodiments, as described in FIG. 1, the processing device 120a and the processing device 120b may be embodiments of the processing device 120. In some embodiments, the processing device 120a and the processing device 120b may be implemented on the same device or on separate devices. For example, both the processing device 120a and the processing device 120b may be implemented on the computing device 200. As another example, the processing device 120a may be implemented on the mobile device 300, and the processing device 120b may be implemented on the computing device 200.
The modules in the processing device 120a and/or the processing device 120b may be connected to or communicate with each other via wired or wireless connections.
As shown in FIG. 4A, the processing device 120a may include an acquisition module 410, a determination module 420, an input module 430, and an optimization module 440.
The acquisition module 410 may be configured to acquire data and/or information from one or more components of the image quality optimization system 100. For example, the acquisition module 410 may be configured to acquire an image to be processed from a storage device or terminal described elsewhere in this specification. The image to be processed may refer to an image (e.g., a 2D image, a 3D image, a 4D image) or raw image data (e.g., raw CT data acquired by the imaging device 110). As another example, the acquisition module 410 may acquire an image processing model from a storage device as described elsewhere in this specification. The image processing model may be a machine learning model (e.g., a deep learning model). In some embodiments, the acquisition module 410 may select the image processing model according to the relevant features of the image, the type of imaging device, the type of reconstruction algorithm, the type of imaging object, the type of imaging target, or the optimization objective. For more description of the image to be processed and the image processing model, refer to other descriptions in this specification (e.g., steps 510 and 530 and their descriptions).
The determination module 420 may be configured to determine at least one relevant feature of the image, the at least one relevant feature including at least one of an acquisition parameter feature related to the image, an imaging parameter feature related to the image, an imaging object feature, and a quality feature of the image. In some embodiments, the determination module 420 may determine the quality feature according to the optimization objective (e.g., noise reduction, artifact reduction, resolution improvement, contrast improvement, motion reduction). For more description of the relevant features, refer to elsewhere in this specification (e.g., step 520 and its description).
The input module 430 may be configured to input the data to be processed into the image processing model. For example, the input module 430 is configured to input the image and the at least one relevant feature into the image processing model. In some embodiments, the input module 430 may preprocess the image and input the preprocessed image and the relevant features into the image processing model.
The optimization module 440 may be configured to determine an optimized image of the image based on the output of the image processing model. In some embodiments, the optimization module 440 may post-process the output of the image processing model to determine the optimized image.
For more description of the generation of the optimized image, refer to elsewhere in this disclosure (e.g., operations 530 and 540 in FIG. 5 and their descriptions).
As shown in FIG. 4B, the processing device 120b may include an acquisition module 450 and a training module 460.
The acquisition module 450 may be configured to acquire data and/or information from one or more components of the image quality optimization system 100. For example, the acquisition module 450 may acquire multiple training samples and their respective corresponding standard images from a storage device described elsewhere in this specification. As another example, the acquisition module 450 may acquire multiple qualified images from a storage device described elsewhere in this specification. The acquisition module 450 may preprocess the multiple qualified images to generate multiple training samples.
The training module 460 may be configured to obtain the image processing model based on a training process. In some embodiments, the training module 460 may train an initial image processing model based on multiple training samples. For example, the training module 460 may iteratively train the initial image processing model until a termination condition is satisfied. In response to the termination condition being satisfied, the image processing model may be finalized. In some embodiments, the termination condition may be related to the value of a loss function. The loss function may be positively correlated with a quality weight, and the quality weight may be related to the quality feature of the sample image. Different training samples may correspond to different quality weights.
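As a sketch of the quality-weighted loss described for the training module 460 (a loss positively correlated with a quality weight derived from each sample image's quality feature), the following assumes a simple weighted mean-squared error; the function names and the MSE choice are illustrative assumptions, not this specification's exact loss.

```python
import numpy as np

def quality_weighted_mse(pred, label, quality_weight):
    """Per-sample MSE scaled by a quality weight: samples whose quality
    features indicate heavier degradation can be weighted more strongly."""
    return quality_weight * np.mean((pred - label) ** 2)

def batch_loss(preds, labels, weights):
    """Average the quality-weighted per-sample losses over a batch."""
    return sum(quality_weighted_mse(p, l, w)
               for p, l, w in zip(preds, labels, weights)) / len(weights)
```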
For more description of training to obtain the image processing model, refer to elsewhere in this specification (e.g., FIGS. 7 and 8 and their descriptions).
It should be understood that the modules shown in FIG. 4 may be implemented in various ways. For example, in some embodiments, the system and its modules may be implemented by hardware, software, or a combination of software and hardware. The hardware part may be implemented using dedicated logic; the software part may be stored in a memory and executed by an appropriate instruction execution system, such as a microprocessor or specially designed hardware. Those skilled in the art will understand that the methods and systems described above may be implemented using computer-executable instructions and/or embedded in processor control code, for example, provided on a carrier medium such as a magnetic disk, CD, or DVD-ROM, a programmable memory such as read-only memory (firmware), or a data carrier such as an optical or electronic signal carrier. The system and its modules of this specification may be implemented not only by hardware circuits such as very-large-scale integrated circuits or gate arrays, semiconductors such as logic chips and transistors, or programmable hardware devices such as field-programmable gate arrays and programmable logic devices, but also by software executed by various types of processors, or by a combination of the above hardware circuits and software (e.g., firmware).
It should be noted that the above description of the processing device 120 is for convenience of description only and does not limit this specification to the scope of the cited embodiments. It can be understood that, for those skilled in the art, after understanding the principle of the system, the various modules may be arbitrarily combined, or a subsystem may be constructed and connected to other modules, without departing from this principle. Two or more modules may be combined into one module, and any one module may be divided into two or more units. In some embodiments, the processing device 120a and/or the processing device 120b may share two or more modules. For example, the processing device 120a and the processing device 120b may share a common acquisition module; that is, the acquisition module 410 and the acquisition module 450 may be implemented by a single module. In some embodiments, the processing device 120a and/or the processing device 120b may include one or more additional modules, such as a storage module (not shown) for storing data. In some embodiments, the processing device 120a and the processing device 120b may be integrated into a single processing device.
FIG. 5 is an exemplary flowchart of image quality optimization according to some embodiments of this specification. In some embodiments, the process 500 may be executed by the processing device 120a or another processing device. In some embodiments, the process 500 may be executed as a set of instructions (e.g., an application program) stored in a storage device (e.g., the storage device 150, the memory 220, and/or the storage 390). The processing device 120a (e.g., the processor 210, the CPU 340, and/or one or more modules shown in FIG. 4A) or another processing device may execute the set of instructions, and when the instructions are executed, the processing device 120a or the other processing device may be configured to execute the process 500. The operations of the process illustrated below are intended to be illustrative. In some embodiments, the process 500 may be accomplished with one or more additional operations not described and/or without one or more of the operations discussed. In addition, the order of the operations of the process 500 shown in FIG. 5 and the content described below do not limit the scope of this specification.
In step 510, the processing device 120a (e.g., the acquisition module 410) may acquire an image to be processed.
The image to be processed may refer to an image (e.g., a 2D image, a 3D image, a 4D image) and/or raw image data (e.g., raw CT data acquired by the imaging device 110). The raw image data includes imaging data. In some embodiments, the image to be processed may be an image generated based on imaging data obtained after the imaging device 110 scans the imaging object; for example, the image to be processed may be an image obtained by image reconstruction based on the imaging data. The reconstruction algorithm may include a filtered back-projection (FBP) algorithm, an algebraic reconstruction technique (ART), a local reconstruction algorithm (LocalRA), an iterative reconstruction algorithm, etc., or any combination thereof. For convenience, the following description takes an "image" as an example.
In some embodiments, the imaging device 110 may be an imaging device of different modalities, and correspondingly, the image to be processed may be an image of different modalities. For example, the imaging device 110 may be a computed tomography (CT) imaging device, and correspondingly, the image to be processed is a computed tomography (CT) image. As another example, the imaging device 110 may be a magnetic resonance (MRI) imaging device, and correspondingly, the image to be processed is a magnetic resonance (MRI) image. The embodiments of this specification do not impose any limitation on the modality of the image to be processed. For example, the image to be processed may also be an X-ray image, a positron emission tomography (PET) image, a single-photon emission computed tomography (SPECT) image, etc., or any combination thereof.
In some embodiments, the image to be processed may be an image with relatively low image quality. For example, the image to be processed may include, but is not limited to: an image containing artifacts, an image containing noise, an image with a low signal-to-noise ratio (SNR), a low-contrast image, a low-resolution image, or an image containing motion (e.g., an image in which pixels have moved), etc.
In some embodiments, the image to be processed may be pre-generated and stored in a storage device described elsewhere in this specification (e.g., the storage device 150, the memory 220, and/or the storage 390). The processing device 120a (e.g., the acquisition module 410) may acquire the image to be processed from the storage device. In some embodiments, the image to be processed may be generated by the processing device 120a. For example, the processing device 120a may instruct the imaging device 110 to scan an imaging object or imaging target to acquire imaging data, and the processing device 120a may generate the image to be processed based on the imaging data. In some embodiments, the acquisition module 410 may acquire the image to be processed from the imaging device 110; the imaging device 110 generates imaging data by scanning the imaging object or imaging target and determines the image to be processed based on the imaging data. In some embodiments, the acquisition module 410 may call a relevant interface to acquire the image to be processed from an external device.
步骤520,处理设备120a(例如,确定模块420)可以确定图像的至少一个相关特征。
在一些实施例中,至少一个相关特征可以是与待处理图像直接和/或间接相关的特征。在一些实施例中,至少一个相关特征可以包括与图像相关的采集参数特征、与图像相关的成像参数特征、成像对象特征、图像的质量特征等或其任意组合。
在一些实施例中,与图像相关的采集参数特征可以是与图像对应的成像数据的采集过程相关的特征,例如,与成像设备110相关的特征。以成像设备为CT成像设备为例,采集参数特征可以包括电压参数、电流参数、扫描信号的滤过参数、探测器的尺寸、探测器对信号的响应特性、探测器对信号运动的敏感特性、探测器产生的噪声强度等或其任意组合。
电压参数可以是施加在X射线球管的阴极和阳极之间的电压值。电压参数可以反映X射线的穿透性,一般地,相同条件下电压参数越高,X射线穿透性越强,低密度的图像对比度越差,反之亦然。
电流参数可以是X射线球管的阴极上的电流值。电流参数可以反映成像对象在扫描时接收到的辐射剂量。电流参数越大,阴极产生的自由电子越多,轰击在X射线球管的阳极靶上的自由电子流数量越大,最终产生的X射线光子密度越大,成像对象接收到的辐射剂量越多。由于辐射剂量会影响CT图像的图像质量,因此,电流参数还可以反映图像的图像质量。
扫描信号的滤过参数可以是与X射线球管和探测器之间的滤波器相关的参数。X射线球管和探测器之间的滤波器可以过滤掉对成像没有帮助的X射线,减少人体的辐射剂量;或者根据人体的形状,适当的过滤掉部分边缘的X射线,使得经过人体衰减的X射线更加均匀。因此,扫描信号的滤过参数可以反映成像对象接收到的X射线的空间分布和/或强度分布。
探测器的尺寸可以包括探测器本身的尺寸和/或探测器内接收单元 的尺寸。
探测器对信号的响应特征可以包括响应增益曲线和/或谱响应非线性特征。在一些实施例中,响应增益曲线可以反映探测器对不同强度的X射线的响应比例。在一些实施例中,谱响应非线性特征可以反映探测器对不同能谱X射线的吸收情况和射束硬化效应。
探测器对信号运动的敏感特性可以包括焦点抖动响应敏感性。在一些实施例中,焦点抖动响应敏感性可以反映探测器因X射线球管焦点抖动而探测到的X射线的变化。例如,由于焦点抖动,焦点位置发生微量位移,探测器所探测到的X射线也会发生微弱变化,该变化将影响成像数据的质量,进而影响图像质量。
探测器产生的噪声强度可以是探测器中的电路产生的静电流导致的图像噪声的强度。在一些实施例中,探测器产生的噪声强度包括探测器的电子学噪声,探测器的电子学噪声是由于系统的固有物理限制产生的电子噪声。例如,探测器的光电倍增管中的电子噪声。又例如,数据获取系统中的电子噪声等。
在一些实施例中,采集参数特征还可以包括成像设备的机型、探测器的类型、X射线球管的类型(例如,X射线球管中包括的阳极靶的靶面类型)、成像设备的机械参数(例如,放射源到旋转中心的距离、放射源到探测器的距离等)等或其任意组合。
在一些实施例中,与图像相关的成像参数特征可以是与图像重建过程相关的参数。以CT图像为例,成像参数特征可以包括像素数量、像素间距、重建厚度、重建间距、卷积核参数等或其任意组合。
像素数量指图像中包括的所有的像素的数量(例如,512*512、1024*1024等)。像素间距指图像中相邻两个像素点之间的距离。
重建厚度指CT的扫描层的厚度,即一层图像的厚度。在一些实施例中,可以根据扫描部位、病变位置或诊断医生的需求等设置不同的重建厚度。在一些实施例中,重建厚度可以是10mm、7mm、5mm、1mm等。重建间距指CT的扫描层之间的间距,即,两层图像之间的间距。
卷积核参数可以包括卷积核的数量、卷积核的尺寸等。
在一些实施例中,成像对象特征可以包括成像对象的身材(例如,身高、体重、体型)、成像对象的年龄、成像对象的性别、成像对象的身体密度、成像对象对X射线的衰减分布、成像对象中成像目标(例如,器官、组织)的密度、成像目标对X射线的衰减分布、成像对象的多层扫描层中每层的密度、成像对象的多层扫描层中每层对X射线的衰减分布、成像目标的多层扫描层中每层的密度和成像目标的多层扫描层中每层对X射线的衰减密度等或其任意组合。
成像对象特征会直接或间接影响X射线的衰减,进而影响图像质量。例如,体型较胖的成像对象对应的X射线的衰减相对较大,相应地,图像的噪声可能相对较大;体型较瘦的成像对象对应的X射线的衰减相对较小,相应地,图像的噪声可能相对较小。又例如,不同的成像目标(例如,不同的器官)对X射线的衰减的影响不同,相应地,对图像质量的影响也不同。
在一些实施例中,确定模块420可以基于历史数据获取成像对象特征。例如,确定模块420基于成像对象的历史图像(例如,历史拍摄的CT图像、X-ray图像、MRI图像、PET-CT图像等),确定成像对象中成像目标(例如,器官、组织)的密度。在一些实施例中,确定模块420可以基于实时采集的数据获取成像对象特征。例如,确定模块420可以基于摄像机115实时采集的图像或其他传感器采集的数据(例如,设置于扫描床114上的体重计采集的数据),确定成像对象的身材、成像对象的身体密度等。在一些实施例中,确定模块420可以基于用户(例如,医生)输入的信息,确定成像对象特征。例如,用户可以通过终端130输入成像对象的年龄、成像对象的性别等。
X射线衰减分布代表X射线衰减值的分布。在一些实施例中,确定模块420可以通过定位像(例如,CT定位像)获取成像对象或成像目标的多层扫描层中每层对X射线的衰减分布。在一些实施例中,确定模块420可以通过等效水模直径确定成像对象、成像目标、以及成像对象或成像目标的多层扫描层中每层对X射线的衰减分布。例如,通过定位像确定等效水模直径,并通过标准换算方式,将等效水模直径换算成X射线的衰减分布。
在一些实施例中,质量特征可以包括图像的噪声特征、伪影特征、运动特征、灰度特征、分辨率、对比度等或其任意组合。
在一些实施例中,噪声特征可以包括噪声分布、噪声强度、噪声率等,或其任意组合。
噪声强度可以是指噪声像素的值,其可以反映噪声像素中的噪声幅度。全局噪声强度可以反映图像中的平均噪声强度或加权平均噪声强度。噪声分布可以反映图像中不同噪声强度或不同噪声幅度的分布情况(例如,不同噪声强度的概率密度)。
在一些实施例中,噪声分布可以以噪声分布图或噪声分布函数等方式表示。在一些实施例中,确定模块420可以基于图像重建生成图像的噪声分布。关于基于图像重建生成噪声分布的具体细节可以参见图6A及其相关描述,在此不再赘述。
在一些实施例中,确定模块420还可以通过噪声提取算法确定噪声分布。在一些实施例中,噪声提取算法可以包括噪声统计模型(例如,高斯噪声模型、脉冲噪声模型、瑞利噪声模型、指数分布噪声模型、均匀分布噪声模型和其他随机噪声模型)和/或与噪声统计模型相对应的概率密度函数(probability density function,PDF)等。
噪声率可以反映出噪声分布的分散程度。在一些实施例中,确定模块420可以基于噪声分布的方差和/或标准偏差,确定噪声率。
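上述“基于噪声分布的方差和/或标准偏差确定噪声率”的计算可用如下Python代码示意(仅为一种最简实现,函数名与度量选择均为本示例的假设):

```python
import numpy as np

def noise_rate(noise_map: np.ndarray) -> float:
    """以噪声分布(噪声图)的标准差衡量其分散程度,作为噪声率的一种度量。"""
    return float(np.std(noise_map))

# 示例:无噪声的噪声图对应噪声率0,分散的噪声图对应较大的噪声率
flat = np.zeros((8, 8))
spread = np.array([[-2.0, 2.0], [-2.0, 2.0]])
```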
在一些实施例中,处理设备120a可以使用图像块方法、滤波器法、空域采样法、贝叶斯估计算法等或其任意组合来确定图像的噪声特征。
在一些实施例中,图像的伪影可以是指图像中与成像对象实际存在的任何部分不对应的部分,其可以导致图像失真、图像重叠、图像缺失。在一些实施例中,伪影特征可以包括伪影分布、伪影强度、全局伪影强度、伪影率等或其任意组合。
伪影强度可以指伪影像素的值,反映伪影像素中的伪影幅度。全局伪影强度可以反映图像中的平均伪影强度或加权平均伪影强度。伪影分布可以反映图像中不同伪影强度的分布情况(例如,不同伪影强度的概率密度)。在一些实施例中,可以以伪影分布图或伪影分布函数等方式表示伪影分布。伪影率可以反映伪影分布的分散程度。
在一些实施例中,确定模块420可以识别图像中的伪影,并确定伪影特征。在一些实施例中,处理设备120a可以使用特征提取算法确定伪影特征。特征提取算法可以包括方向梯度直方图(HOG)、局部二值模式(LBP)算法、尺度不变特征变换匹配(SIFT)算法、哈尔特征(Haar-like)算法、灰度共生矩阵法(GLCM)、霍夫变换、傅里叶变换、傅里叶形状描述符法、形状参数法、有限元法(FEM)、旋转函数和小波描述符等。与噪声特征类似地,处理设备120a还可以基于统计模型和/或与统计模型相对应的概率密度函数(PDF)确定伪影特征。
在一些实施例中,运动特征可以反映图像中像素点的运动情况。运动特征可以包括运动强度、运动方向和运动位置等或其任意组合。在一些实施例中,确定模块420可以通过运动向量提取算法得到运动向量分布。在一些实施例中,运动向量提取算法可以包括运动矢量场自适应搜索算法(motion vector field adaptive search technique,MVFAST)、图像块匹配运动估计算法、像素法、以及特征法等或其任意组合。
在一些实施例中,灰度特征可以包括灰度分布和/或全局灰度分布。在一些实施例中,灰度分布可以反映图像中像素的灰度值的分布情况。全局灰度分布可以反映图像中像素的灰度值的整体分布情况(例如,像素的平均灰度值和/或像素的加权平均灰度值)。在一些实施例中,确定模块420可以利用方向梯度直方图(HOG)和/或局部二值模式(LBP)算法确定灰度特征。
在一些实施例中,分辨率可以是单位英寸图像中所包含的像素点数。一般地,图像的分辨率越高,图像就越清晰。
在一些实施例中,对比度可以指图像中明暗区域最亮的白和最暗的黑之间不同亮度层级的测量,即指一幅图像灰度反差的大小。一般地,对比度越大,图像越清晰,色彩越鲜明。
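仅作为示意,可以用如下代码计算图像灰度反差的一种简单度量(此处采用Michelson对比度,即最亮与最暗灰度之差与之和的比值;该度量方式为本示例的假设,并非本说明书限定的计算方式):

```python
import numpy as np

def michelson_contrast(img: np.ndarray) -> float:
    """Michelson对比度:(I_max - I_min) / (I_max + I_min)。"""
    i_max, i_min = float(img.max()), float(img.min())
    if i_max + i_min == 0:
        return 0.0  # 避免全零图像导致除零
    return (i_max - i_min) / (i_max + i_min)
```

一般地,该值越大,图像明暗反差越强,对应正文中“对比度越大,图像越清晰”的描述。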
在一些实施例中,确定模块420可以基于机器学习模型确定上述质量特征。具体地,确定模块420可以将采集参数特征、成像参数特征和成像对象特征中的至少一种输入特征处理模型;并基于特征处理模型的输出,确定图像的质量特征。关于基于机器学习模型确定质量特征的具体细节可以参见图6B及其相关描述,在此不再赘述。
在一些实施例中,确定模块420可以根据图像质量优化目标(或图像质量优化的应用场景),确定不同的质量特征。例如,如果图像质量优化目标为去噪,则质量特征为噪声特征;如果图像质量优化目标为去伪影,则质量特征为伪影特征;如果图像质量优化目标为运动矫正,则质量特征为运动特征;如果图像质量优化目标为提高分辨率,则质量特征为分辨率;如果图像质量优化目标为提升图像的对比度,则质量特征为对比度;如果图像质量优化目标为散射矫正,则质量特征为伪影特征。这是因为散射矫正的本质是校正X射线散射导致的实际采集数据与估计数据之间的偏差:散射是指X射线在探测器或其内部结构件表面发生折射而产生的额外射线,由于散射的存在,重建出的图像中会出现伪影。可以理解的,以上示例仅为实现图像质量优化目标所需的最少质量特征,还可以根据实际需求选择多个质量特征,以加强对图像质量的表示。例如,图像质量优化目标为提升对比度时,质量特征可以为对比度和分辨率。
步骤530,处理设备120a(例如,输入模块430)可以将图像和至少一个相关特征输入图像处理模型。
在一些实施例中,图像处理模型可以是预先训练好的机器学习模型,用于基于图像和/或至少一个相关特征来优化图像,以实现对应于图像的优化目标。在一些实施例中,图像处理模型可以是深度学习模型。深度学习模型可以包括深度神经网络(DNN)模型、多层神经网络(MLP)模型、卷积神经网络(CNN)模型、生成对抗神经网络(GAN)模型、和/或深层卷积编码解码(DCED)神经网络模型。关于图像处理模型的训练过程可以参见图7及其相关描述,在此不再赘述。
在一些实施例中,处理设备120a可以基于采集参数特征、成像参数特征和成像对象特征中的至少一个,选择图像处理模型。例如,处理设备120可以通过输入样本图像和样本采集参数特征,训练采集参数特征对应的图像处理模型,相应的,当基于图像及其采集参数特征进行优化时,处理设备120可以获取与采集参数特征对应的图像处理模型。又例如,处理设备120通过输入样本图像、样本采集参数特征和样本成像对象特征,训练采集参数特征和成像对象特征对应的图像处理模型,相应的,当基于图像及其采集参数特征和成像对象特征进行优化时,处理设备120可以获取采集参数特征和成像对象特征对应的图像处理模型。可以理解的,处理设备120基于相关特征还可以进行其他组合,并获取对应的图像处理模型。
采集参数特征和/或成像参数特征可以反映获取图像的成像设备类型(即成像设备的模态,例如,CT设备、MRI设备或PET设备);成像对象特征可以反映图像中的成像对象类型或成像目标类型。在本说明书中,成像对象和/或成像目标可以简称为“对象”,成像对象类型和/或成像目标类型可以简称为“对象类型”。
在一些实施例中,处理设备120a可以基于对象类型,选择对象类型对应的图像处理模型。在一些实施例中,对象类型对应的图像处理模型(例如,“胸部”对应的图像处理模型)可以基于与对象类型有关的多个训练样本(例如,样本胸部图像)训练得到。关于训练得到对应于对象类型的图像处理模型的具体细节可以参见图11及其相关描述,在此不再赘述。在一些实施例中,处理设备120可以基于成像设备类型,选择成像设备类型对应的图像处理模型。在一些实施例中,成像设备类型对应的图像处理模型(例如,CT设备对应的图像处理模型)可以基于与成像设备类型有关的多个训练样本(例如,样本CT图像),训练得到。关于训练得到成像设备类型对应的图像处理模型的具体细节可以参见图10及其相关描述,在此不再赘述。
在一些实施例中,成像参数特征还可以反映图像的重建算法。在一些实施例中,处理设备120可以基于重建算法,选择重建算法对应的图像处理模型。在一些实施例中,重建算法对应的图像处理模型(例如,迭代重建算法对应的图像处理模型)可以基于与重建算法有关的多个训练样本(例如,样本迭代重建图像)训练得到。
在一些实施例中,不同的重建算法可以引入不同的噪声类型和/或伪影类型。相应地,处理设备120可以基于噪声类型、伪影类型或运动类型选择图像处理模型。类似地,噪声类型对应的图像处理模型(例如,对应于高斯噪声的图像处理模型)可以基于与噪声类型有关的多个训练样本(例如,样本高斯噪声图像)训练得到。伪影类型对应的图像处理模型(例如,条状伪影对应的图像处理模型)可以基于与伪影类型有关的多个训练样本(例如,样本条状伪影图像)训练得到。类似地,处理设备120可以训练得到运动类型对应的图像处理模型。
在一些实施例中,处理设备120a可以从图像质量优化系统100的存储设备(例如,存储设备150)或通过网络(例如,网络140)从外部资源获得图像处理模型。例如,图像处理模型可以预先由训练设备(例如,处理设备120)训练并存储在图像质量优化系统100的存储设备中。处理设备120可以访问存储设备并获取图像处理模型。在一些实施例中,图像处理模型可以由训练设备(例如,处理设备120)根据训练过程(例如,流程700、1000、1100等)生成。
在一些实施例中,输入模块430可以处理至少一个相关特征,并将处理后的特征输入图像处理模型。例如,输入模块430可以将至少一个相关特征转化为向量或矩阵形式,进行向量或矩阵的融合(例如,相加、相乘、拼接等),并将融合后的向量或矩阵输入图像处理模型。
在一些实施例中,在将图像和至少一个相关特征输入至图像处理模型前,处理设备120可以对图像进行预处理,预处理包括图像重采样或图像归一化等。仅作为示例,处理设备120可以对图像进行图像重采样来确定具有预设大小的重采样图像。然后,处理设备120可以对重新采样图像进行归一化,使得归一化后的图像的像素(或体素)值可以在预设范围内(例如,[-1,1])。进一步地,处理设备120可以将归一化的图像和至少一个相关特征输入图像处理模型。
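上述预处理(先重采样至预设大小,再归一化至[-1,1])可以用如下代码示意(此处采用最近邻重采样与线性归一化,仅为一种可能的实现方式,并非本说明书限定的预处理算法):

```python
import numpy as np

def preprocess(img: np.ndarray, size: tuple) -> np.ndarray:
    """将图像最近邻重采样到预设大小,并线性归一化到[-1, 1]。"""
    # 最近邻重采样:按比例选取源图像的行列索引
    rows = (np.arange(size[0]) * img.shape[0] / size[0]).astype(int)
    cols = (np.arange(size[1]) * img.shape[1] / size[1]).astype(int)
    resampled = img[np.ix_(rows, cols)]
    # 线性归一化到[-1, 1]
    lo, hi = resampled.min(), resampled.max()
    if hi == lo:
        return np.zeros(size)  # 常值图像归一化为全零
    return 2 * (resampled - lo) / (hi - lo) - 1
```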
步骤540,处理设备120a(例如,优化模块440)基于图像处理模型的输出,确定图像的优化图像。
在一些实施例中,如前文所述,确定模块420可以根据图像质量优化目标,确定不同的质量特征。相应地,优化模块440可以确定不同的优化图像。例如,如果图像质量优化目标为去噪或提高分辨率,则优化图像为噪声水平低于待处理图像的图像(例如,降噪/去噪后的图像)或分辨率高于待处理图像的图像。又例如,如果图像质量优化目标为去伪影,则优化图像为伪影水平低于待处理图像的图像(例如,降伪影/去伪影后的图像)。
在一些实施例中,优化模块440可以将图像处理模型的输出确定为优化图像。在一些实施例中,确定优化图像后,优化模块440还可以对优化图像进行进一步处理,例如,平滑处理、调整分辨率或对比度、调整亮度等。在一些实施例中,优化模块440还可以将优化图像或处理后的优化图像传输至终端130以展示。在一些实施例中,优化模块440还可以将优化图像或处理后的优化图像传输至存储设备150以存储。
在本说明书实施例中,通过引入待处理图像的多个维度的相关特征(例如,采集参数特征、成像参数特征、成像对象特征、图像质量特征等),可以使图像处理模型更好地理解图像相关信息,从而更好地完成图像优化目标。
图6A是根据本说明书的一些实施例所示的基于图像重建生成图像的噪声特征的示例性流程图。在一些实施例中,流程600a可以由处理设备120a或其他处理设备执行。在一些实施例中,流程600a可以作为存储在存储设备(例如,存储设备150、存储器220和/或存储器390)中的一组指令(例如,应用程序)来执行。处理设备120a(例如,处理器210、CPU 340和/或图4A所示的一个或多个模块)可以执行指令组,并且当执行指令时,处理设备120a可以配置为执行流程600a。下面呈现的所示流程的操作旨在说明。在一些实施例中,流程600a可以利用未描述的一个或多个附加操作和/或没有讨论的一个或多个操作来完成。此外,图6A中所示流程600a的操作顺序和以下描述的内容并不限制本说明书的范围。
步骤610,处理设备120a(确定模块420)获取图像对应的采集数据的初始噪声特征。
在一些实施例中,图像对应的采集数据可以是指扫描阶段获取的扫描数据(或可以称之为“生数据”或“成像数据”)。以CT图像为例,其对应的扫描数据可以指探测器112探测到的X射线数据。
在一些实施例中,确定模块420可以通过多种方式得到初始噪声特征。在一些实施例中,确定模块420可以基于采集数据的采集信号确定初始噪声特征(例如,初始噪声分布)。具体地,确定模块420可以获取采集数据对应的至少一个采集信号强度;确定至少一个采集信号强度对应的至少一个噪声幅度;基于至少一个噪声幅度,确定采集数据的初始噪声特征。
在一些实施例中,采集信号强度可以是探测器112采集到的电信号的强度。具体地,探测器112可以采集穿过成像对象的X射线,将接收到的光子转化为电信号,相应地,电信号的强度可以表征探测器112采集到的光子数量。
在一些实施例中,确定模块420可以基于统计规律,确定至少一个采集信号强度对应的至少一个噪声幅度。在一些实施例中,统计规律可以是由X射线和探测器112相互作用形成的统计规律。在一些实施例中,统计规律可以包括探测器112接收的光子数量与噪声的关系。例如,探测器112接收到的光子数量可能与噪声的期望(例如,噪声幅度或噪声幅度的方差)成正比关系。如前所述,采集信号的强度可以表征探测器112采集到的光子数量,因此,确定模块420可以基于该统计规律,确定至少一个采集信号强度对应的至少一个噪声幅度。
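作为示意,若假设探测器光子计数服从泊松统计(方差近似等于计数均值),则可以由采集信号强度估计噪声幅度。下述代码中,信号强度与光子数之间的比例系数k为本示例假设的参数:

```python
import math

def noise_amplitude(signal_intensity: float, k: float = 1.0) -> float:
    """由采集信号强度估计噪声幅度:
    光子数 N ≈ k * 信号强度;泊松噪声的标准差(噪声幅度)≈ sqrt(N)。"""
    photons = k * signal_intensity
    return math.sqrt(photons)
```

例如,当信号强度为100、k取1时,估计的噪声幅度约为10;对每个采集信号强度执行该估计,即可得到正文中所述的至少一个噪声幅度。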
步骤620,处理设备120a(例如,确定模块420)基于初始噪声特征和采集数据进行图像重建,生成图像的噪声特征。
在一些实施例中,确定模块420可以通过重建算法进行图像重建。重建算法可以包括傅里叶切片定理算法、滤波反投影(FBP)算法、扇束重建算法、迭代重建算法、分析重建算法、基于压缩感测(CS)的算法等或其任意组合。在一些实施例中,由于图像重建需要重建参数(即,成像参数特征),因此,确定模块420可以基于初始噪声特征(例如,初始噪声分布)、采集数据和成像参数特征进行图像重建,生成图像的噪声特征(例如,噪声分布)。
在一些实施例中,确定模块420在确定了噪声特征中的任意一个特征时,相应的,可以确定噪声特征中的其他特征。例如,确定模块420在确定了噪声分布时,可以通过预设的转化关系,得到噪声强度或噪声率等。
图6B是根据本说明书的一些实施例所示的基于特征处理模型确定图像质量特征的示例性流程图。在一些实施例中,流程600b可以由处理设备120a或其他处理设备执行。在一些实施例中,流程600b可以作为存储在存储设备(例如,存储设备150、存储器220和/或存储器390)中的一组指令(例如,应用程序)来执行。处理设备120a(例如,处理器210、CPU340和/或图4A所示的一个或多个模块)可以执行指令组,并且当执行指令时,处理设备120a可以配置为执行流程600b。下面呈现的所示流程的操作旨在说明。在一些实施例中,流程600b可以利用未描述的一个或多个附加操作和/或没有讨论的一个或多个操作来完成。此外,图6B中所示流程600b的操作顺序和以下描述的内容并不限制本说明书的范围。
步骤630,处理设备120a(例如,确定模块420)将采集参数特征、成像参数特征和成像对象特征中的至少一种输入特征处理模型。
在一些实施例中,特征处理模型可以是预先训练好的机器学习模型。在一些实施例中,特征处理模型可以包括卷积神经网络(Convolutional Neural Networks,CNN)模型、支持向量机、朴素贝叶斯、决策树等。在一些实施例中,可以基于多个样本图像和多个样本图像分别对应的多个参考质量特征(参考质量特征作为样本图像对应的标签)训练初始特征处理模型,得到特征处理模型。在一些实施例中,样本图像可以是成像对象的任何模态的医学图像,例如CT图像、X-ray图像、MR图像、PET-CT图像等。在一些实施例中,样本图像还可以是成像对象的体态图像,例如摄像机拍摄的患者的正面图像、侧面图像等。在一些实施例中,以一个样本图像为例,其对应的参考质量特征(即,标签)可以以质量特征图(例如,参考噪声分布图、参考伪影分布图、参考运动向量分布图)、质量特征矩阵等方式表示。
在一些实施例中,可以将多个样本图像和多个样本图像分别对应的多个参考质量特征输入至初始特征处理模型,初始特征处理模型可以处理多个样本图像并确定其对应的样本质量特征。进一步地,可以基于样本质量特征与参考质量特征之间的差异,迭代更新模型参数直至训练完成。例如,可以基于样本质量特征与参考质量特征之间的差异,确定损失函数的值,进一步通过调整模型参数,使损失函数值尽可能小,直至完成训练。
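上述“基于样本质量特征与参考质量特征之间的差异确定损失函数的值,并通过调整模型参数使损失函数值尽可能小”的迭代训练过程,可以用一个极简的一维线性模型加梯度下降来示意(模型结构、平方损失与学习率均为本示例的假设,并非特征处理模型的真实结构):

```python
import numpy as np

def train_linear(xs, ys, lr=0.1, epochs=200):
    """梯度下降最小化平方损失 L = mean((w*x - y)^2),示意参数迭代更新。"""
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    w = 0.0
    for _ in range(epochs):
        grad = np.mean(2 * (w * xs - ys) * xs)  # dL/dw
        w -= lr * grad                          # 朝损失减小的方向更新参数
    return w

w = train_linear([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])  # 期望收敛到 w ≈ 2
```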
步骤640,处理设备120a(例如,确定模块420)基于特征处理模型的输出,确定图像的质量特征。
在一些实施例中,与参考质量特征类似,模型输出的待处理图像的质量特征可以以质量特征图(例如,噪声分布图、伪影分布图、运动向量分布图)、质量特征矩阵等方式表示。
图7是根据本说明书的一些实施例所示的图像处理模型的训练过程的示例性流程图。在一些实施例中,流程700可以由处理设备120b或其他处理设备在线或离线执行。例如,流程700可以被实现为存储在存储设备(例如,存储设备150、存储器220和/或存储器390)中的一组指令(例如,应用程序)。处理设备120b(例如,处理器210、CPU 340和/或图4B所示的一个或多个模块)可以执行指令组,并且当执行指令时,处理设备120b可以配置为执行流程700。下面呈现的所示流程的操作旨在说明。在 一些实施例中,在图5的操作530中描述的图像处理模型可以是根据流程700获得的图像处理模型。
步骤710,处理设备120b(例如,获取模块450)获取多个训练样本和多个训练样本分别对应的多个标准图像。
在一些实施例中,多个训练样本中的每一个包括样本图像和样本图像的至少一个样本相关特征。在一些实施例中,样本图像可以与图像处理模型的训练目标相关,即与图像处理模型的应用场景相关。例如,如果图像处理模型的应用场景为去噪,则样本图像为添加噪声后的图像。又例如,如果图像处理模型的应用场景为去伪影,则样本图像为添加伪影后的图像。又例如,如果图像处理模型的应用场景为提高分辨率,则样本图像为降低分辨率后的图像。在一些实施例中,多个训练样本可以对应于多种质量水平(例如,多种噪声强度、多种伪影强度)。
在一些实施例中,多个训练样本可以预先生成并存储在本说明书其他地方披露的存储设备(例如,存储设备150、或外部数据库)中。处理设备120可以直接从存储设备中获取多个训练样本。
在一些实施例中,可以基于历史数据确定多个样本图像和/或多个标准图像。例如,可以将成像对象对应的历史图像(例如,历史CT图像)作为样本图像,将该样本图像进行处理(例如,去噪、去伪影、提高分辨率,处理的方式与图像处理模型的训练目标相关)后得到的图像作为标准图像。
在一些实施例中,多个训练样本中的至少一部分可以由处理设备120生成。仅作为示例,处理设备120可以通过对至少一个合格图像进行预处理,得到多个样本图像和多个标准图像。合格图像可以指其质量特征满足质量要求的图像。关于获取多个样本图像和多个标准图像的具体细节可以参见图8及其相关描述,在此不再赘述。
在一些实施例中,至少一个样本相关特征可以包括:与样本图像相关的样本采集参数特征、与样本图像相关的样本成像参数特征、样本成像对象特征、样本图像的样本质量特征等或其任意组合。在一些实施例中,样本质量特征包括样本图像的样本噪声特征、样本伪影特征、样本运动特征、样本灰度特征、样本分辨率、样本对比度等或其任意组合。关于至少一个样本相关特征的具体细节可参见步骤520中图像的至少一个相关特征,在此不再赘述。
在一些实施例中,多个训练样本中的每一个还包括至少一个样本相关特征中每一个的权重。通过对样本相关特征赋予不同的权重,初步图像处理模型可以对不同的训练样本采用不同的通道进行学习,使得训练好的图像处理模型可以对未学习过的训练样本具备更好的预测能力。
步骤720,处理设备120b(例如,训练模块460)基于多个训练样本和多个标准图像,训练初步图像处理模型,确定图像处理模型。
在一些实施例中,在将多个训练样本和多个标准图像输入至初步图像处理模型之前,还可以对多个训练样本和多个标准图像进行处理,并将处理后的多个训练样本和多个标准图像输入至初步图像处理模型。例如,可以将多个样本图像、多个样本图像的样本相关特征以及多个标准图像分别转换为第一矩阵、第二矩阵以及第三矩阵;融合第一矩阵、第二矩阵和/或第三矩阵;将融合后的矩阵输入初步图像处理模型。在一些实施例中,融合可以包括但不限于:矩阵相加、矩阵相乘、矩阵拼接等。在一些实施例中,融合还可以在输入至初步图像处理模型后进行。例如,在初步图像处理模型的输入层进行融合。
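将样本图像与样本相关特征分别转换为矩阵并进行融合(相加、相乘、拼接等)的过程可示意如下(此处将特征向量扩展为与图像同尺寸的特征图后按通道拼接或相加,具体融合方式仅为本示例的假设):

```python
import numpy as np

def fuse(image: np.ndarray, feature_vec: np.ndarray, mode: str = "concat") -> np.ndarray:
    """把一维相关特征向量扩展为特征图,与二维图像融合。"""
    # 每个特征值扩展为一张与图像同尺寸的常值特征图
    feat_map = np.broadcast_to(feature_vec.reshape(-1, 1, 1),
                               (feature_vec.size, *image.shape))
    if mode == "concat":   # 拼接:特征图作为附加通道
        return np.concatenate([image[None, ...], feat_map], axis=0)
    if mode == "add":      # 相加:逐元素叠加到图像上
        return image[None, ...] + feat_map
    raise ValueError(mode)
```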
在一些情况下,在至少一个样本相关特征中,除样本质量特征以外,其他特征(例如,样本采集参数特征、样本成像参数特征、样本成像对象特征)通常为离散值,不能直接作为模型输入层的输入。相应地,在一些实施例中,可以将这些离散值转换为第四矩阵,将第四矩阵作为初步图像处理模型的中间层的输入。
在一些实施例中,可以将多个训练样本和多个标准图像(作为训练样本的标签)输入至初步图像处理模型。在一些实施例中,初步图像处理模型可以包括如DNN模型、MLP模型、CNN模型、GAN模型,DCED网络模型之类的深度学习模型。在一些实施例中,初步图像处理模型可以包括至少一个初步模型参数。至少一个初步模型参数可以是图像质量优化系统100的默认设置,也可以基于不同情况调整。以CNN模型为例,至少一个初步模型参数可以包括卷积层数量、卷积核数量、卷积核大小、步长、每个卷积层的参数等,或其任意组合。
初步图像处理模型可以处理多个训练样本并确定其对应的样本优化图像。进一步地,可以基于多个样本优化图像和多个标准图像,迭代更新模型参数直至训练完成。例如,可以基于多个样本优化图像与标准图像之间的差异,确定损失函数的值,进一步通过调整模型参数,使损失函数值尽可能小,直至完成训练。
在一些实施例中,图像处理模型的损失函数可以与质量权重正相关;质量权重基于样本质量特征确定。在一些实施例中,质量权重可以与样本图像的质量水平有关。例如,可以根据以下等式(1)确定当前迭代中的损失函数:
L = (1/n) · Σ_{i=1}^{n} w_i · l(f(x_i), y_i)        (1)
其中L表示损失函数,n表示多个训练样本的数量,x i表示第i个训练样本中的样本图像(可以称为“第i个样本图像”),f(x i)表示与第i个样本图像相对应的第i个预测的优化图像,y i表示与第i个样本图像相对应的第i个合格图像(作为标准图像),l(f(x i),y i)表示与第i个预测的优化图像和第i个合格图像之间的差别相关的损失(例如,平方损失、绝对值损失),w i表示与第i个训练样本对应的第i个质量权重。
在一些实施例中,可以根据以下等式(2)对损失函数进行进一步地归一化:
L = (1/n) · Σ_{i=1}^{n} (m_i / (m_i + c)) · l(f(x_i), y_i)        (2)
其中m i表示第i个样本图像的第i个样本质量特征,c表示用于控制样本质量特征的标准化程度的常数。在一些实施例中,不同的训练样本可以对应于不同的质量权重。以特定的训练样本为例,可以根据训练样本中样本图像的一个或多个样本质量特征确定质量权重。例如,对于一个或多个样本质量特征中的每一个,处理设备120b可以确定与对应于样本质量特征的质量子权重。进一步地,处理设备120b可以基于与样本质量特征相对应的质量子权重,确定质量权重。例如,处理设备120可以将一个或多个质量子权重的综合结果(例如,加权求和、乘积、平均值)确定为质量权重。
在一些实施例中,根据不同的图像质量优化目标,质量权重和/或质量子权重可以与样本质量特征负相关或正相关,从而平衡不同样本对训练该模型的影响。例如,对于用于“减少噪声”的模型,多个训练样本可以包括具有较高样本噪声强度的样本图像和具有较低样本噪声强度的样本图像。在训练过程中,优化具有较高样本噪声强度的样本,损失函数容易降低,因此,为了平衡具有较高样本噪声强度的样本和具有较低样本噪声强度的样本对损失函数的影响,具有较高样本噪声强度的样本对应的质量权重可以小于具有较低样本噪声强度的样本对应的质量权重。其他噪声特征(例如,噪声率等)与噪声强度类似,不再赘述。在一些实施例中,质量权重可以与样本噪声特征负相关。
另一方面,多个训练样本可以包括具有较高样本对比度的样本图像(例如,包括具有较高结构差异的物体(例如,骨头、肺)的样本CT图像)和具有较低样本对比度的样本图像(例如,包括具有如软组织(例如,肝脏)之类较低结构差异的物体的样本CT图像)。在训练过程中,优化具有较高样本对比度的样本,损失函数容易降低。因此,为了平衡具有较高样本对比度的样本和具有较低样本对比度的样本对损失函数的影响,具有较高样本对比度的样本对应的质量权重可以小于具有较低样本对比度的样本对应的质量权重。在一些实施例中,质量权重可以与样本对比度负相关。
又例如,类似地,对于用于“减少伪影”的模型,具有较高样本伪影强度的样本对应的质量权重可以小于具有较低样本伪影强度的样本对应的质量权重。在一些实施例中,质量权重可以与样本伪影特征负相关。
又例如,类似地,对于用于“减少运动”的模型,具有较高样本运动强度的样本对应的质量权重可以小于具有较低样本运动强度的样本对应的质量权重。其他运动特征(例如,运动位置等)与运动强度类似,不再赘述。在一些实施例中,质量权重与样本运动特征负相关。
又例如,对于用于“提高分辨率”的模型,多个训练样本可以包括具有较高样本分辨率的样本图像和具有较低样本分辨率的样本图像。在训练过程中,优化具有较低样本分辨率的样本,损失函数容易降低。因此,为了平衡具有较高样本分辨率的样本和具有较低样本分辨率的样本对损失函数的影响,具有较高样本分辨率的样本对应的质量权重可以大于具有较低样本分辨率的样本对应的质量权重。在一些实施例中,质量权重可以与样本分辨率正相关。
在一些实施例中,质量权重可以表示为以下等式(3):

w_i = A_i^x · B_i^y        (3)

其中,w_i表示第i个训练样本对应的第i个质量权重,A_i表示第i个训练样本的第一样本质量特征,B_i表示第i个训练样本的第二样本质量特征,A_i^x表示对应于第一样本质量特征的第一质量子权重,B_i^y表示对应于第二样本质量特征的第二质量子权重,并且x和y可以大于或小于零。当x(或y)的值大于零时,质量权重可以与第一样本质量特征(或第二样本质量特征)正相关;当x(或y)的值小于零时,质量权重可以与第一样本质量特征(或第二样本质量特征)负相关。x和y的值可以是图像质量优化系统100的默认设置,也可以是图像质量优化系统100的操作者设置的,以平衡第一样本质量特征和第二样本质量特征对模型训练的影响。
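等式(1)与等式(3)所描述的“质量加权损失”可以用如下代码示意(其中平方损失的选择、具体的质量特征取值与幂指数x、y均为本示例的假设):

```python
import numpy as np

def quality_weight(a: float, b: float, x: float, y: float) -> float:
    """等式(3)的示意:质量权重为各质量子权重之积 w = a**x * b**y。
    x、y大于零时权重与对应质量特征正相关,小于零时负相关。"""
    return (a ** x) * (b ** y)

def weighted_loss(preds, targets, weights):
    """等式(1)的示意:以平方损失为例的质量加权平均损失。"""
    preds, targets, weights = map(np.asarray, (preds, targets, weights))
    return float(np.mean(weights * (preds - targets) ** 2))
```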
在一些实施例中,处理设备120b可以确定对应于不同成像对象类型和/或不同扫描设备类型的图像处理模型。更多描述可以参见本公开的其他地方(例如,图10-11及其描述)。在一些实施例中,处理设备120b可以确定对应于不同相关特征,即对应于采集参数特征、成像参数特征和/或成像对象特征的图像处理模型。在一些实施例中,处理设备120b可以确定对应不同噪声类型、不同伪影类型和/或不同运动类型等的图像处理模型。
根据本说明书实施例,在图像处理模型的训练过程中,会将样本图像和样本图像的至少一个样本相关特征共同作为训练样本训练初步图像处理模型。至少一个样本相关特征涉及多个维度(例如,采集参数特征、成像参数特征、成像对象特征、质量特征),可以促使初步图像处理模型更好地学习图像各个维度的信息,进而更好地完成优化目标。
图8是根据本说明书的一些实施例所示的获取训练样本的示例性流程图。在一些实施例中,流程800可以由处理设备(例如,处理设备120b或其他处理设备)在线或离线执行。例如,流程800可以作为存储在存储设备(例如,存储设备150、存储器220和/或存储器390)中的一组指令(例如,应用程序)来执行。处理设备120b(例如,处理器210、CPU 340和/或图4B所示的一个或多个模块)可以执行指令组,并且当执行指令时,处理设备120b可以配置为执行流程800。下面呈现的所示流程的操作旨在说明。在一些实施例中,在图5的操作530中描述的图像处理模型可以是根据流程800获得的图像处理模型。
步骤810,处理设备120b(例如,获取模块450)获取至少一个合格图像。
在一些实施例中,合格图像可以是图像质量满足预设要求的图像。在一些实施例中,合格图像可以与图像处理模型的训练目标相关,即与图像处理模型的应用场景相关。例如,如果图像处理模型的应用场景为去噪,则合格图像为噪声满足预设要求(例如,噪声水平低于预设阈值)的图像。又例如,如果图像处理模型的应用场景为去伪影,则合格图像为伪影满足预设要求(例如,伪影水平低于预设阈值)的图像。又例如,如果图像处理模型的应用场景为提高分辨率,则合格图像为分辨率满足预设要求(例如,分辨率高于预设阈值)的图像。又例如,如果图像处理模型的应用场景为运动矫正,则合格图像为运动满足预设要求(例如,运动水平低于预设阈值)的图像。在一些实施例中,预设要求可以是系统默认值,也可以根据实际需求调整。
步骤820,处理设备120b(例如,获取模块450)通过对至少一个合格图像进行预处理,得到多个训练样本。
在一些实施例中,预处理可以包括分割、添加噪声、添加伪影、降低分辨率、添加运动等或其任意组合。如前所述,类似地,预处理也与图像处理模型的训练目标相关,即与图像处理模型的应用场景相关。以图像处理模型的应用场景为去噪为例,在一些实施例中,可以通过加噪算法进行添加噪声。在一些实施例中,加噪算法包括但不限于线性同余算法、梅森旋转算法、带进位的线性同余算法等。
在一些实施例中,可以对一个合格图像进行预处理,得到多个样本图像。例如,可以针对一个合格图像添加不同水平的噪声,从而得到多个样本图像。在一些实施例中,可以将一个合格图像分割为多个子图像,并对多个子图像分别进行预处理,得到多个样本图像。例如,可以针对每一个子图像添加不同水平的噪声,从而得到多个样本图像。
为了更加清楚、方便地描述基于合格图像得到样本图像的过程,以下图9A和图9B为例进行阐述。为了方便,图9A和图9B所示的流程900a和流程900b以对合格图像进行添加噪声得到多个样本图像为例。可以理解,图9A和图9B示意的流程同样适用于对合格图像添加伪影、降低分辨率、添加运动以得到多个样本图像,本说明书实施例并不对此进行限制。
如图9A所示,可以对合格图像910添加不同级别的噪声,如一级噪声920-1、二级噪声920-2、……、第m级噪声920-m,得到对应的添加了不同级别噪声后的样本图像,即样本图像930-1、样本图像930-2、……、样本图像930-m。
如图9B所示,可以将合格图像910进行分割得到多个子图像,即子图像940-1、……、子图像940-n;然后对多个子图像分别添加不同级别的噪声,得到对应的添加了不同级别噪声后的样本图像,例如,对子图像940-1添加一级噪声920-1、二级噪声920-2、……、第m级噪声920-m,得到样本图像930-1-1、样本图像930-1-2、……、样本图像930-1-m;……;对子图像940-n添加一级噪声920-1、二级噪声920-2、……、第m级噪声920-m,得到样本图像930-n-1、样本图像930-n-2、……、样本图像930-n–m。
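图9A与图9B所示的“对合格图像(或其子图像)添加不同级别噪声以生成多个样本图像”的流程可示意如下(此处采用高斯噪声,噪声级别与随机种子均为本示例的假设):

```python
import numpy as np

def make_samples(good_image: np.ndarray, noise_levels, seed: int = 0):
    """对同一张合格图像添加不同强度的高斯噪声,生成多个样本图像;
    合格图像本身即为这些样本共同的标准图像(标签)。"""
    rng = np.random.default_rng(seed)
    return [good_image + rng.normal(0.0, s, good_image.shape)
            for s in noise_levels]

# 三个噪声级别对应三个样本图像,合格图像(全零图)为共同标签
samples = make_samples(np.zeros((16, 16)), noise_levels=[0.0, 0.1, 0.5])
```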
步骤830,处理设备120b(例如,获取模块450)对于多个训练样本中的每一个,确定其对应的合格图像为训练样本的标准图像。
例如,如图9A所示,将合格图像910作为样本图像930-1、样本图像930-2、……、以及样本图像930-m的标准图像;如图9B所示,将合格图像910的子图像940-1(可以称之为“合格子图像”)作为样本图像930-1-1、样本图像930-1-2、……、样本图像930-1-m的标准图像。
图10是根据本说明书的一些实施例所示的用于获取与成像设备类型相对应的图像处理模型的示例性流程图。在一些实施例中,流程1000可以由处理设备120b或其他处理设备在线或离线执行。在一些实施例中,流程1000可以作为存储在存储设备(例如,存储设备150)中的一组指令(例如,应用程序)来执行。处理设备120b可以执行指令组,并且当执行指令时,处理设备120b可以配置为执行流程1000。下面呈现的所示流程的操作旨在说明。在一些实施例中,可以基于流程1000的一个或多个操作来执行流程500。
步骤1010,处理设备120b(例如,获取模块450)获取与成像设备类型相关的多个合格图像。
如前所述,合格图像可以是指图像质量满足预设要求的图像,因此,与成像设备类型相关的合格图像可以指由该类型的成像设备产生的合格图像。例如,对于CT设备,与成像设备类型相关的多个合格图像可以包括多个合格的CT图像。又例如,对于PET设备,与成像设备类型相关的多个合格图像可以包括多个合格的PET图像。
在一些实施例中,多个合格图像可以被预先生成并存储在本发明其他地方披露的存储设备(例如,存储设备150)中。处理设备120b可以直接从存储设备中获取多个合格图像。在一些实施例中,处理设备120b可以基于该类型的成像设备生成的成像数据来生成多个合格图像。
步骤1020,处理设备120b(例如,获取模块450)对多个合格图像预处理,生成多个样本图像。
在一些实施例中,预处理可以包括分割、添加噪声、添加伪影、添加运动等,或其任何组合。
在一些实施例中,处理设备120b可以利用图像分割算法将合格图像分割为多个合格子图像。图像分割算法可以包括但不限于基于阈值的分割算法、基于边缘的分割算法、基于区域的分割算法、基于聚类的分割算法、基于小波变换的图像分割算法、基于数学形态学的图像分割算法、基于人工神经网络的图像分割算法等,或其任何组合。
在一些实施例中,处理设备120b,可以将不同的干扰信息(例如,具有不同等级的噪声、伪影和/或运动)添加到合格图像或合格子图像中,生成与合格图像或合格子图像相对应的多个样本图像。例如,处理设备120b可以将具有不同等级和/或不同类型的噪声(例如,高斯噪声、脉冲噪声、瑞利噪声、指数分布噪声、均匀分布噪声、随机噪声)添加到合格图像或合格子图像中。类似地,处理设备120b可以将不同等级和/或不同类型的伪影(例如,条状伪影、环状伪影、阴影状伪影、带状伪影、风车伪影、条纹伪影、运动伪影)添加到合格图像或合格子图像中。类似地,处理设备120b还可以将不同等级和/或不同类型的运动添加到合格图像或合格子图像中,此处不再赘述。
在一些实施例中,处理设备120b可以对合格图像或合格子图像的分辨率、对比度、灰度等进行调整,生成与合格图像或合格子图像相对应的多个样本图像。例如,将合格图像或合格子图像的分辨率进行不同等级的降低等。
步骤1030,处理设备120b(例如,训练模块460)基于多个样本图像和对应于多个样本图像的至少一个样本相关特征,训练初步图像处理模型,获得与成像设备类型相对应的图像处理模型。
在一些实施例中,处理设备120b可以获取本说明书中其他地方所描述的初步图像处理模型。处理设备120b可以获取对应于多个样本图像的至少一个样本相关特征。处理设备120b可以基于多个样本图像和多个样本图像各自对应的至少一个样本相关特征,训练初步图像处理模型,生成与成像设备类型相对应的图像处理模型,其中,样本图像的标签为相对应的合格图像或合格子图像。与成像设备类型相对应的图像处理模型的训练可以与步骤720描述的训练相同或相似,在此不再赘述。
图11是根据本说明书的一些实施例所示的用于获取与对象类型相对应的图像处理模型的示例性流程图。在一些实施例中,流程1100可以由处理设备120b或其他处理设备在线或离线执行。在一些实施例中,流程1100可以被实现为存储在存储设备(例如,存储设备150)中的一组指令(例如,应用程序)。处理设备120b可以执行指令组,并且当执行指令时,处理设备120b可以配置为执行流程1100。下面呈现的所示流程的操作旨在说明。在一些实施例中,可以基于流程1100的一个或多个操作来执行流程500。
步骤1110,处理设备120b(例如,获取模块450)获取与对象类型相关的多个合格图像。
如前所述,合格图像可以是指图像质量满足预设要求的图像,因此,与对象类型相关的合格图像可以指包括该对象类型的合格图像。例如,对于对象类型“胸部”,与对象类型相关的多个合格图像可以包括多个合格的胸部图像。又例如,对于对象类型“头部”,与对象类型相关的多个合格图像可以包括多个合格头部图像。
在一些实施例中,多个合格图像可以被预先生成并存储在本发明其他地方披露的存储设备(例如,存储设备150)中。处理设备120b可以直接从存储设备中获取多个合格图像。在一些实施例中,处理设备120b可以基于与对象类型相关的成像数据,生成多个合格图像。
步骤1120,处理设备120b(例如,获取模块450)对多个合格图像预处理,生成多个样本图像。
在一些实施例中,预处理可以包括分割、添加噪声、添加伪影等,或其任何组合。关于步骤1120的更多细节可参见步骤1020及其相关描述,在此不再赘述。
步骤1130,处理设备120b(例如,训练模块460)基于多个样本图像和对应于多个样本图像的至少一个样本相关特征,训练初步图像处理模型,获得与对象类型相对应的图像处理模型。
在一些实施例中,处理设备120b可以获取本说明书中其他地方所描述的初步图像处理模型。处理设备120b可以获取对应于多个样本图像的至少一个样本相关特征。处理设备120b可以基于多个样本图像和至少一个样本相关特征,训练初步图像处理模型,生成与对象类型相对应的图像处理模型,其中,样本图像的标签为相对应的合格图像或合格子图像。与对象类型相对应的图像处理模型的训练可以与步骤720描述的训练相同或相似,在此不再赘述。
图12是根据本说明书的一些实施例所示的图像质量优化的另一示例性流程图。在一些实施例中,流程1200可以由处理设备120a或其他处理设备执行。在一些实施例中,流程1200可以作为存储在存储设备(例如,存储设备150、存储器220和/或存储器390)中的一组指令(例如,应用程序)来执行。处理设备120a(例如,处理器210、CPU340和/或图4A中所示的一个或多个模块)或其他处理设备可以执行指令组,并且当执行指令组时,处理设备120a或其他处理设备可以被配置为执行流程1200。下面呈现的所示流程的操作旨在说明。在一些实施例中,流程1200可以利用未描述的一个或多个附加操作和/或没有讨论的一个或多个操作来完成。此外,图12中所示流程1200的操作顺序和以下描述的内容并不限制本说明书的范围。
步骤1210,处理设备120a(例如,获取模块410)获取待处理的采集数据。
关于采集数据的具体细节可以参见步骤610及其相关描述,在此不再赘述。关于步骤1210的具体细节可以参见步骤510及其相关描述,在此不再赘述。
步骤1220,处理设备120a(例如,确定模块420)确定采集数据的至少一个相关特征。
在一些实施例中,采集数据的至少一个相关特征可以包括:与采集数据相关的采集参数特征、成像对象特征和采集数据的质量特征中的至少一种。关于步骤1220的具体细节可以参见步骤520及其相关描述,在此不再赘述。
步骤1230,处理设备120a(例如,输入模块430)将采集数据和至少一个相关特征输入图像处理模型。
关于步骤1230的具体细节可以参见步骤530及其相关描述,在此不再赘述。
步骤1240,处理设备120a(例如,优化模块440)基于图像处理模型的输出,确定采集数据的优化图像数据。
关于步骤1240的具体细节可以参见步骤540及其相关描述,在此不再赘述。
图13A和图13B是根据本说明书的一些实施例所示的使用本说明书实施例的优化方法和其他方法进行图像优化的效果对比结果。
图13A为未加入至少一个相关特征训练得到的图像处理模型对肝脏图像进行优化后的优化图像;图13B为加入至少一个相关特征训练得到的图像处理模型对肝脏图像进行优化后的优化图像。根据图13A和图13B对比可以看出,图13A的优化图像中的肝脏处遗留有噪声,降噪效果较差,可能影响后续对图像的分析,进而影响正常诊断;而图13B的优化图像能更好地还原肝脏的结构,没有噪声遗留,相应地,可以为后续诊断提供准确信息。也就是说,根据本说明书实施例所述的方法,图像处理模型对于噪声分布具备更好的学习能力,对噪声具备更好的处理能力,降噪效果更佳。
本说明书实施例还提供一种图像质量优化的装置,该装置包括处理器以及存储器,存储器用于存储指令,处理器用于执行指令,以实现如本说明书任意位置所述的图像质量优化的方法对应的操作。
本说明书实施例还提供一种计算机可读存储介质。该计算机可读存储介质存储计算机指令,当计算机指令被处理器执行时,实现如本说明书任意位置所述的图像质量优化的方法对应的操作。
本说明书实施例可能带来的有益效果包括但不限于:通过将图像的多个维度的相关特征(例如,采集参数特征、成像参数特征、成像对象特征、质量特征)引入模型训练过程,可以使图像处理模型能更好地学习图像多个维度的信息,从而更好地完成图像优化目的。需要说明的是,不同实施例可能产生的有益效果不同,在不同的实施例里,可能产生的有益效果可以是以上任意一种或几种的组合,也可以是其他任何可能获取的有益效果。
上文已对基本概念做了描述,显然,对于本领域技术人员来说,上述详细披露仅仅作为示例,而并不构成对本说明书的限定。虽然此处并没有明确说明,本领域技术人员可能会对本说明书进行各种修改、改进和修正。该类修改、改进和修正在本说明书中被建议,所以该类修改、改进、修正仍属于本说明书示范实施例的精神和范围。
同时,本说明书使用了特定词语来描述本说明书的实施例。如“一个实施例”、“一实施例”、和/或“一些实施例”意指与本说明书至少一个实施例相关的某一特征、结构或特点。因此,应强调并注意的是,本说明书中在不同位置两次或多次提及的“一实施例”或“一个实施例”或“一个替代性实施例”并不一定是指同一实施例。此外,本说明书的一个或多个实施例中的某些特征、结构或特点可以进行适当的组合。
此外,本领域技术人员可以理解,本说明书的各方面可以通过若干 具有可专利性的种类或情况进行说明和描述,包括任何新的和有用的工序、机器、产品或物质的组合,或对他们的任何新的和有用的改进。相应地,本说明书的各个方面可以完全由硬件执行、可以完全由软件(包括固件、常驻软件、微码等)执行、也可以由硬件和软件组合执行。以上硬件或软件均可被称为“数据块”、“模块”、“引擎”、“单元”、“组件”或“系统”。此外,本说明书的各方面可能表现为位于一个或多个计算机可读介质中的计算机产品,该产品包括计算机可读程序编码。
计算机存储介质可能包含一个内含有计算机程序编码的传播数据信号,例如在基带上或作为载波的一部分。该传播信号可能有多种表现形式,包括电磁形式、光形式等,或合适的组合形式。计算机存储介质可以是除计算机可读存储介质之外的任何计算机可读介质,该介质可以通过连接至一个指令执行系统、装置或设备以实现通讯、传播或传输供使用的程序。位于计算机存储介质上的程序编码可以通过任何合适的介质进行传播,包括无线电、电缆、光纤电缆、RF、或类似介质,或任何上述介质的组合。
本说明书各部分操作所需的计算机程序编码可以用任意一种或多种程序语言编写,包括面向对象编程语言如Java、Scala、Smalltalk、Eiffel、JADE、Emerald、C++、C#、VB.NET、Python等,常规程序化编程语言如C语言、Visual Basic、Fortran2003、Perl、COBOL2002、PHP、ABAP,动态编程语言如Python、Ruby和Groovy,或其他编程语言等。该程序编码可以完全在用户计算机上运行、或作为独立的软件包在用户计算机上运行、或部分在用户计算机上运行部分在远程计算机运行、或完全在远程计算机或处理设备上运行。在后种情况下,远程计算机可以通过任何网络形式与用户计算机连接,比如局域网(LAN)或广域网(WAN),或连接至外部计算机(例如通过因特网),或在云计算环境中,或作为服务使用如软件即服务(SaaS)。
此外,除非权利要求中明确说明,本说明书所述处理元素和序列的顺序、数字字母的使用、或其他名称的使用,并非用于限定本说明书流程和方法的顺序。尽管上述披露中通过各种示例讨论了一些目前认为有用的发明实施例,但应当理解的是,该类细节仅起到说明的目的,附加的权利要求并不仅限于披露的实施例,相反,权利要求旨在覆盖所有符合本说明书实施例实质和范围的修正和等价组合。例如,虽然以上所描述的系统组件可以通过硬件设备实现,但是也可以只通过软件的解决方案得以实现,如在现有的处理设备或移动设备上安装所描述的系统。
同理,应当注意的是,为了简化本说明书披露的表述,从而帮助对一个或多个发明实施例的理解,前文对本说明书实施例的描述中,有时会将多种特征归并至一个实施例、附图或对其的描述中。但是,这种披露方法并不意味着本说明书对象所需要的特征比权利要求中提及的特征多。实际上,实施例的特征要少于上述披露的单个实施例的全部特征。
一些实施例中使用了描述成分、属性数量的数字,应当理解的是,此类用于实施例描述的数字,在一些示例中使用了修饰词“大约”、“近似”或“大体上”来修饰。除非另外说明,“大约”、“近似”或“大体上”表明所述数字允许有±20%的变化。相应地,在一些实施例中,说明书和权利要求中使用的数值参数均为近似值,该近似值根据个别实施例所需特点可以发生改变。在一些实施例中,数值参数应考虑规定的有效数位并采用一般位数保留的方法。尽管本说明书一些实施例中用于确认其范围广度的数值域和参数为近似值,在具体实施例中,此类数值的设定在可行范围内尽可能精确。
针对本说明书引用的每个专利、专利申请、专利申请公开物和其他材料,如文章、书籍、说明书、出版物、文档等,特此将其全部内容并入本说明书作为参考。与本说明书内容不一致或产生冲突的申请历史文件除外, 对本说明书权利要求最广范围有限制的文件(当前或之后附加于本说明书中的)也除外。需要说明的是,如果本说明书附属材料中的描述、定义、和/或术语的使用与本说明书所述内容有不一致或冲突的地方,以本说明书的描述、定义和/或术语的使用为准。
最后,应当理解的是,本说明书中所述实施例仅用以说明本说明书实施例的原则。其他的变形也可能属于本说明书的范围。因此,作为示例而非限制,本说明书实施例的替代配置可视为与本说明书的教导一致。相应地,本说明书的实施例不仅限于本说明书明确介绍和描述的实施例。

Claims (15)

  1. 一种图像质量优化的方法,其特征在于,包括:
    获取待处理图像;
    确定所述图像的至少一个相关特征,所述至少一个相关特征包括:与所述图像相关的采集参数特征、与所述图像相关的成像参数特征、成像对象特征和所述图像的质量特征中的至少一种;
    将所述图像和所述至少一个相关特征输入图像处理模型;
    基于所述图像处理模型的输出,确定所述图像的优化图像。
  2. 根据权利要求1所述的方法,其特征在于,所述采集参数特征包括:
    电压参数、电流参数、扫描信号的滤过参数、探测器的尺寸、所述探测器对信号的响应特性、所述探测器对所述信号运动的敏感特性和所述探测器产生的噪声强度中的至少一种。
  3. 根据权利要求1所述的方法,其特征在于,所述成像参数特征包括:
    像素数量、像素间距、重建厚度、重建间距和卷积核参数中的至少一种。
  4. 根据权利要求1所述的方法,其特征在于,所述成像对象特征包括:所述成像对象的身材、所述成像对象的年龄、所述成像对象的性别、所述成像对象的身体密度、所述成像对象对X射线的衰减分布、所述成像对象中成像目标的密度、所述成像目标对X射线的衰减分布、所述成像对象的多层扫描层中每层的密度、所述成像对象的多层扫描层中每层对X射线的衰减分布、所述成像目标的多层扫描层中每层的密度和所述成像目标的多层扫描层中每层对X射线的衰减密度中的至少一种。
  5. 根据权利要求1所述的方法,其特征在于,所述质量特征包括所述图像的噪声特征、伪影特征、运动特征、灰度特征、分辨率和对比度中的至少一个。
  6. 根据权利要求5所述的方法,其特征在于,
    所述质量特征包括所述图像的所述噪声特征,
    所述方法还包括确定所述图像的所述噪声特征,包括:
    获取所述图像对应的采集数据的初始噪声特征;
    基于所述初始噪声特征和所述采集数据进行图像重建,生成所述图像的所述噪声特征。
  7. 根据权利要求6所述的方法,其特征在于,所述获取所述图像对应的采集数据的初始噪声特征,包括:
    获取所述采集数据对应的至少一个采集信号强度;
    确定所述至少一个采集信号强度对应的至少一个噪声幅度;
    基于所述至少一个噪声幅度,确定所述采集数据的所述初始噪声特征。
  8. 根据权利要求1所述的方法,其特征在于,
    所述至少一个相关特征包括所述质量特征,
    所述方法还包括确定所述质量特征,包括:
    将所述采集参数特征、所述成像参数特征和所述成像对象特征中的至少一种输入特征处理模型;
    基于所述特征处理模型的输出,确定所述图像的所述质量特征。
  9. 根据权利要求1所述方法,其特征在于,所述方法还包括:
    基于所述采集参数特征、所述成像参数特征和所述成像对象特征中的至少一个,选择所述图像处理模型。
  10. 根据权利要求1所述的方法,其特征在于,所述图像处理模型通过训练过程获取,所述训练过程包括:
    获取多个训练样本和所述多个训练样本分别对应的多个标准图像,所述多个训练样本中的每一个包括样本图像和所述样本图像的至少一个样本相关特征,所述至少一个样本相关特征包括:与所述样本图像相关的样本采集参数特征、与所述样本图像相关的样本成像参数特征、样本成像对象特征和所述样本图像的样本质量特征中的至少一种;
    基于所述多个训练样本和所述多个标准图像,训练初步图像处理模型,确定所述图像处理模型。
  11. 根据权利要求10所述的方法,其特征在于,所述图像处理模型的损失函数与质量权重正相关;所述质量权重基于所述样本质量特征确定。
  12. 一种图像质量优化的方法,其特征在于,包括:
    获取待处理的采集数据;
    确定所述采集数据的至少一个相关特征,所述采集数据的至少一个相关特征包括:与所述采集数据相关的采集参数特征、成像对象特征和所述采集数据的质量特征中的至少一种;
    将所述采集数据和所述至少一个相关特征输入图像处理模型;
    基于所述图像处理模型的输出,确定所述采集数据的优化图像数据。
  13. 一种图像质量优化的系统,其特征在于,包括:
    获取模块,用于获取待处理图像;
    确定模块,用于确定所述图像的至少一个相关特征,所述至少一个相 关特征包括:与所述图像相关的采集参数特征、与所述图像相关的成像参数特征、成像对象特征和所述图像的质量特征中的至少一种;
    输入模块,用于将所述图像和所述至少一个相关特征输入图像处理模型;
    优化模块,用于基于所述图像处理模型的输出,确定所述图像的优化图像。
  14. 一种图像质量优化的系统,其特征在于,包括:
    获取模块,用于获取待处理的采集数据;
    确定模块,用于确定所述采集数据的至少一个相关特征,所述采集数据的至少一个相关特征包括:与所述采集数据相关的采集参数特征、成像对象特征和所述采集数据的质量特征中的至少一种;
    输入模块,用于将所述采集数据和所述至少一个相关特征输入图像处理模型;
    优化模块,用于基于所述图像处理模型的输出,确定所述采集数据的优化图像数据。
  15. 一种计算机可读存储介质,其特征在于,所述存储介质存储计算机指令,所述计算机指令被处理器执行时,实现如权利要求1至12中任一项所述的图像质量优化的方法对应的操作。
PCT/CN2021/094461 2020-05-18 2021-05-18 一种图像质量优化的方法及系统 WO2021233315A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP21808874.8A EP4141782A4 (en) 2020-05-18 2021-05-18 IMAGE QUALITY OPTIMIZATION METHOD AND SYSTEM
US17/446,303 US20210390694A1 (en) 2020-05-18 2021-08-30 Systems and methods for image quality optimization

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CNPCT/CN2020/090862 2020-05-18
PCT/CN2020/090862 WO2021232195A1 (en) 2020-05-18 2020-05-18 Systems and methods for image optimization
CN202110044133.9A CN113689342B (zh) 2020-05-18 2021-01-13 一种图像质量优化的方法及系统
CN202110044133.9 2021-01-13

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/446,303 Continuation US20210390694A1 (en) 2020-05-18 2021-08-30 Systems and methods for image quality optimization

Publications (1)

Publication Number Publication Date
WO2021233315A1 true WO2021233315A1 (zh) 2021-11-25

Family

ID=74535040

Family Applications (2)

Application Number Title Priority Date Filing Date
PCT/CN2020/090862 WO2021232195A1 (en) 2020-05-18 2020-05-18 Systems and methods for image optimization
PCT/CN2021/094461 WO2021233315A1 (zh) 2020-05-18 2021-05-18 一种图像质量优化的方法及系统

Family Applications Before (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/090862 WO2021232195A1 (en) 2020-05-18 2020-05-18 Systems and methods for image optimization

Country Status (4)

Country Link
US (2) US20210142476A1 (zh)
EP (2) EP4139881A1 (zh)
CN (2) CN112368738B (zh)
WO (2) WO2021232195A1 (zh)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112424835B (zh) 2020-05-18 2023-11-24 上海联影医疗科技股份有限公司 用于图像重建的系统和方法
US20220414525A1 (en) * 2021-06-23 2022-12-29 Varian Medical Systems, Inc. Machine learning approach for solving beam angle optimization
CN113706419A (zh) * 2021-09-13 2021-11-26 上海联影医疗科技股份有限公司 一种图像处理方法和系统
CN113810610A (zh) * 2021-09-16 2021-12-17 杭州海康威视数字技术股份有限公司 一种对象抓拍方法及装置
JP2023077988A (ja) * 2021-11-25 2023-06-06 キヤノン株式会社 放射線画像処理装置、放射線画像処理方法、画像処理装置、学習装置、学習データの生成方法、及びプログラム
CN114359667B (zh) * 2021-12-30 2024-01-30 西安交通大学 一种基于生成式对抗网络的强度相干识别方法及设备
WO2023123352A1 (en) * 2021-12-31 2023-07-06 Shanghai United Imaging Healthcare Co., Ltd. Systems and methods for motion correction for medical images
CN114066889B (zh) * 2022-01-12 2022-04-29 广州永士达医疗科技有限责任公司 一种oct主机的成像质量检测方法及装置
CN114494251B (zh) * 2022-04-06 2022-07-15 南昌睿度医疗科技有限公司 Spect图像处理方法以及相关设备
CN117038064B (zh) * 2023-10-07 2024-01-09 之江实验室 一种辅助分析算法的评估方法、装置、存储介质及设备

Citations (8)

Publication number Priority date Publication date Assignee Title
CN103999087A (zh) * 2011-12-15 2014-08-20 皇家飞利浦有限公司 针对接收方优化的医学成像重建
CN104318524A (zh) * 2014-10-15 2015-01-28 烟台艾睿光电科技有限公司 一种基于YCbCr色彩空间的图像增强方法、装置及系统
CN106572826A (zh) * 2014-06-13 2017-04-19 帕洛代克斯集团有限公司 在x射线成像中的自动化剂量控制的系统和方法
US20170143312A1 (en) * 2014-09-03 2017-05-25 Contextvision Ab Methods and systems for automatic control of subjective image quality in imaging of objects
CN107301662A (zh) * 2017-06-30 2017-10-27 深圳大学 深度图像的压缩恢复方法、装置、设备及存储介质
CN110490118A (zh) * 2019-08-14 2019-11-22 厦门美图之家科技有限公司 图像处理方法及装置
CN110660123A (zh) * 2018-06-29 2020-01-07 清华大学 基于神经网络的三维ct图像重建方法和设备以及存储介质
CN110807737A (zh) * 2018-08-06 2020-02-18 通用电气公司 迭代图像重建框架

Family Cites Families (25)

Publication number Priority date Publication date Assignee Title
US4456926A (en) * 1982-06-21 1984-06-26 Thomson-Csf Broadcast, Inc. Enhancement of fluroscopically generated images
US20040068167A1 (en) * 2002-09-13 2004-04-08 Jiang Hsieh Computer aided processing of medical images
US7039222B2 (en) * 2003-02-28 2006-05-02 Eastman Kodak Company Method and system for enhancing portrait images that are processed in a batch mode
US20080108895A1 (en) * 2006-11-06 2008-05-08 General Electric Company Method and system for defining at least one acquisition and processing parameter in a tomosynthesis system
CN100508891C (zh) * 2007-09-04 2009-07-08 陈武凡 在pet成像中最大后验优化图像重建方法
CN103620476A (zh) * 2011-06-30 2014-03-05 通用电气医疗集团生物科学公司 生物成像的图像质量优化
DE102013206415A1 (de) * 2013-04-11 2014-10-16 Siemens Aktiengesellschaft Automatische Gewinnung optimierter Ausgabedaten
US9916525B2 (en) * 2015-10-13 2018-03-13 Siemens Healthcare Gmbh Learning-based framework for personalized image quality evaluation and optimization
US10387765B2 (en) * 2016-06-23 2019-08-20 Siemens Healthcare Gmbh Image correction using a deep generative machine-learning model
DE102016215109A1 (de) * 2016-08-12 2018-02-15 Siemens Healthcare Gmbh Verfahren und Datenverarbeitungseinheit zum Optimieren eines Bildrekonstruktionsalgorithmus
CN106485680B (zh) * 2016-10-13 2017-11-28 上海联影医疗科技有限公司 图像校正方法及装置
CN106952239A (zh) * 2017-03-28 2017-07-14 厦门幻世网络科技有限公司 图像生成方法和装置
CN107341516A (zh) * 2017-07-07 2017-11-10 广东中星电子有限公司 图像质量调节方法和图像处理智能平台
WO2019033390A1 (en) * 2017-08-18 2019-02-21 Shenzhen United Imaging Healthcare Co., Ltd. SYSTEM AND METHOD FOR IMAGE RECONSTRUCTION
EP3485815B1 (en) * 2017-11-21 2020-05-13 Siemens Healthcare GmbH Method for controlling a x-ray imaging device, x-ray imaging device, computer program and electronically readable storage medium
CN107945139B (zh) * 2017-12-15 2020-01-07 深圳大学 一种图像处理方法、存储介质及智能终端
CN108881708A (zh) * 2017-12-18 2018-11-23 南通使爱智能科技有限公司 一种智能图像处理装置
US10832451B2 (en) * 2018-07-18 2020-11-10 Siemens Healthcare Gmbh Machine learning in iterative image reconstruction
CN109242788A (zh) * 2018-08-21 2019-01-18 福州大学 一种基于编码-解码卷积神经网络低照度图像优化方法
US11062210B2 (en) * 2018-10-02 2021-07-13 Nokia Technologies Oy Method and apparatus for training a neural network used for denoising
CN110809782B (zh) * 2018-10-22 2023-09-12 上海联影医疗科技股份有限公司 衰减校正系统和方法
CN110151210B (zh) * 2019-05-21 2022-11-22 上海联影医疗科技股份有限公司 一种医学图像处理方法、系统、装置和计算机可读介质
CN110176047B (zh) * 2019-06-03 2022-12-06 上海联影医疗科技股份有限公司 一种提高ct图像质量的方法和系统
CN110610463A (zh) * 2019-08-07 2019-12-24 深圳大学 一种图像增强方法及装置
CN110853742B (zh) * 2019-11-12 2023-03-28 上海联影医疗科技股份有限公司 一种图像重建方法、系统、装置及存储介质

Patent Citations (8)

Publication number Priority date Publication date Assignee Title
CN103999087A (zh) * 2011-12-15 2014-08-20 皇家飞利浦有限公司 针对接收方优化的医学成像重建
CN106572826A (zh) * 2014-06-13 2017-04-19 帕洛代克斯集团有限公司 在x射线成像中的自动化剂量控制的系统和方法
US20170143312A1 (en) * 2014-09-03 2017-05-25 Contextvision Ab Methods and systems for automatic control of subjective image quality in imaging of objects
CN104318524A (zh) * 2014-10-15 2015-01-28 烟台艾睿光电科技有限公司 一种基于YCbCr色彩空间的图像增强方法、装置及系统
CN107301662A (zh) * 2017-06-30 2017-10-27 深圳大学 深度图像的压缩恢复方法、装置、设备及存储介质
CN110660123A (zh) * 2018-06-29 2020-01-07 清华大学 基于神经网络的三维ct图像重建方法和设备以及存储介质
CN110807737A (zh) * 2018-08-06 2020-02-18 通用电气公司 迭代图像重建框架
CN110490118A (zh) * 2019-08-14 2019-11-22 厦门美图之家科技有限公司 图像处理方法及装置

Non-Patent Citations (1)

Title
See also references of EP4141782A4 *

Also Published As

Publication number Publication date
US20210142476A1 (en) 2021-05-13
EP4139881A4 (en) 2023-03-01
CN112368738B (zh) 2024-01-16
EP4139881A1 (en) 2023-03-01
CN113689342B (zh) 2024-04-02
US20210390694A1 (en) 2021-12-16
CN112368738A (zh) 2021-02-12
WO2021232195A1 (en) 2021-11-25
CN113689342A (zh) 2021-11-23
EP4141782A4 (en) 2023-10-11
EP4141782A1 (en) 2023-03-01

Similar Documents

Publication Publication Date Title
WO2021233315A1 (zh) 一种图像质量优化的方法及系统
US11554272B2 (en) System and method for diagnostic and treatment
US11694086B2 (en) Systems and methods for detecting region of interset in image
CN112424835B (zh) 用于图像重建的系统和方法
US20230033442A1 (en) Systems and methods of using self-attention deep learning for image enhancement
WO2021068975A1 (en) Systems and methods for image reconstruction
US11436720B2 (en) Systems and methods for generating image metric
EP4071706A1 (en) Medical data processing method, model generation method, medical data processing apparatus, and computer-readable non-transitory storage medium storing medical data processing program
US20220092755A1 (en) Systems and methods for image processing
US11672496B2 (en) Imaging systems and methods
US20230360312A1 (en) Systems and methods for image processing
US20230225687A1 (en) System and method for medical imaging
US20230342974A1 (en) Imaging systems and methods
WO2023123352A1 (en) Systems and methods for motion correction for medical images
WO2023123361A1 (en) Systems and methods for motion correction for a medical image

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21808874

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021808874

Country of ref document: EP

Effective date: 20221122

NENP Non-entry into the national phase

Ref country code: DE