WO2023102749A1 - Image processing method and system - Google Patents


Info

Publication number
WO2023102749A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
target image
reference image
difference
target
Prior art date
Application number
PCT/CN2021/136183
Other languages
French (fr)
Chinese (zh)
Inventor
倪成
陈伟梁
李博
傅费超
汪鹏
李娅宁
Original Assignee
上海联影医疗科技股份有限公司 (Shanghai United Imaging Healthcare Co., Ltd.)
Application filed by 上海联影医疗科技股份有限公司 (Shanghai United Imaging Healthcare Co., Ltd.)
Priority to PCT/CN2021/136183 (WO2023102749A1)
Priority to CN202180102179.XA (CN117940958A)
Publication of WO2023102749A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33: Determination of transform parameters for the alignment of images, i.e. image registration, using feature-based methods

Definitions

  • This description relates to the field of medical technology, in particular to an image processing method and system.
  • the image processing system includes a storage device and a processor, the storage device is used to store computer instructions; the processor is used to connect with the storage device.
  • When executing the computer instructions, the processor causes the system to perform the following operations: acquire a target image of an object; acquire at least one reference image of the object, wherein the at least one reference image corresponds to an imaging device different from the imaging device corresponding to the target image; and correct the target image based on the at least one reference image.
  • One of the embodiments of this specification provides an image processing method.
  • the method includes: acquiring a target image of an object; acquiring at least one reference image of the object, wherein the imaging device corresponding to the at least one reference image is different from the imaging device corresponding to the target image; and correcting the target image based on the at least one reference image.
  • One of the embodiments of the present specification provides a computer-readable storage medium, the storage medium stores computer instructions, and when a computer reads the computer instructions, the computer executes an image processing method.
  • the method includes: acquiring a target image of an object; acquiring at least one reference image of the object, wherein the imaging device corresponding to the at least one reference image is different from the imaging device corresponding to the target image; and correcting the target image based on the at least one reference image.
  • One of the embodiments of this specification provides an image processing system.
  • the system includes: a target image acquisition module, used to acquire a target image of an object; a reference image acquisition module, used to acquire at least one reference image of the object, wherein the imaging device corresponding to the at least one reference image is different from the imaging device corresponding to the target image; and a correction module, configured to correct the target image based on the at least one reference image.
  • FIG. 1 is a schematic diagram of an application scenario of an exemplary image processing system according to some embodiments of this specification.
  • FIG. 2 is a block diagram of an exemplary image processing system according to some embodiments of this specification.
  • FIG. 3 is a flowchart of an exemplary image processing method according to some embodiments of this specification.
  • FIG. 4A is a schematic diagram of an exemplary reference image according to some embodiments of this specification.
  • FIG. 4B is a schematic diagram of an exemplary target image according to some embodiments of this specification.
  • FIG. 4C is a schematic diagram of an exemplary correction process according to some embodiments of this specification.
  • FIG. 5A is a schematic diagram of an exemplary method for determining the difference between a target image and a reference image according to some embodiments of this specification.
  • FIG. 5B is a schematic diagram of an exemplary method for determining the difference between a target image and a reference image according to some embodiments of this specification.
  • FIG. 6A is a flowchart of an exemplary image processing method according to some embodiments of this specification.
  • FIG. 6B is a schematic diagram of an exemplary image processing method according to some embodiments of this specification.
  • FIG. 7A is a flowchart of an exemplary image processing method according to some embodiments of this specification.
  • FIG. 7B is a schematic diagram of an exemplary image processing method according to some embodiments of this specification.
  • FIG. 8A is a flowchart of an exemplary image processing method according to some embodiments of this specification.
  • FIG. 8B is a schematic diagram of an exemplary image processing method according to some embodiments of this specification.
  • Terms such as "system", "device", "unit", and "module" are used herein as a means of distinguishing components, elements, parts, or assemblies of different levels.
  • These words may be replaced by other expressions if those expressions achieve the same purpose.
  • an image processing system 100 may include an imaging device 110 , a network 120 , a terminal device 130 , a processing device 140 and a storage device 150 .
  • Multiple components in the image processing system 100 may be connected to each other through a network 120 .
  • the imaging device 110 and the terminal device 130 may be connected or communicate through the network 120 .
  • the imaging device 110 and the processing device 140 may be connected or communicate through the network 120 .
  • connections between components in image processing system 100 are variable.
  • the terminal device 130 may be directly connected to the processing device 140 .
  • the imaging device 110 may be used to scan an object in a detection area or a scanning area to obtain imaging data of the object.
  • objects may include biological objects and/or non-biological objects.
  • an object may be animate or inanimate organic and/or inorganic matter.
  • imaging device 110 may be a non-invasive imaging device used for disease diagnosis or research purposes.
  • imaging device 110 may include a single modality scanner and/or a multimodal scanner.
  • Single-modality scanners may include, for example, ultrasound scanners, X-ray scanners, computed tomography (CT) scanners, magnetic resonance imaging (MRI) scanners, sonography scanners, positron emission tomography (PET) scanners, optical coherence tomography (OCT) scanners, ultrasound (US) scanners, intravascular ultrasound (IVUS) scanners, near-infrared spectroscopy (NIRS) scanners, far-infrared (FIR) scanners, etc., or any combination thereof.
  • Multimodal scanners may include, for example, X-ray imaging-magnetic resonance imaging (X-ray-MRI) scanners, positron emission tomography-X-ray imaging (PET-X-ray) scanners, single photon emission computed tomography-magnetic resonance imaging (SPECT-MRI) scanners, positron emission tomography-computed tomography (PET-CT) scanners, digital subtraction angiography-magnetic resonance imaging (DSA-MRI) scanners, etc., or any combination thereof.
  • the imaging device 110 may include an MRI scanner 111 and a CT scanner 112, wherein the MRI scanner 111 may be used to acquire MRI data of a subject to generate an MR image, and the CT scanner 112 may be used to acquire CT data of the subject to generate a CT image.
  • Network 120 may include any suitable network capable of facilitating the exchange of information and/or data for image processing system 100 .
  • at least one component of the image processing system 100 (for example, the imaging device 110, the terminal device 130, the processing device 140, the storage device 150) can exchange information and/or data through the network 120.
  • the processing device 140 may acquire imaging data of the object from the imaging device 110 through the network 120 .
  • Network 120 may include public networks (e.g., the Internet), private networks (e.g., local area networks (LANs)), wired networks, wireless networks (e.g., 802.11 networks, Wi-Fi networks), frame relay networks, virtual private networks (VPNs), satellite networks, telephone networks, routers, hubs, switches, fiber optic networks, telecommunications networks, intranets, wireless local area networks (WLANs), metropolitan area networks (MANs), public switched telephone networks (PSTNs), Bluetooth™ networks, ZigBee™ networks, near field communication (NFC) networks, etc., or any combination thereof.
  • network 120 may include at least one network access point.
  • network 120 may include wired and/or wireless network access points, such as base stations and/or Internet exchange points, through which at least one component of image processing system 100 may connect to network 120 to exchange data and/or information.
  • the terminal device 130 may communicate with and/or be connected to the imaging device 110 , the processing device 140 and/or the storage device 150 .
  • the user may interact with the imaging device 110 through the terminal device 130 to control one or more components of the imaging device 110 .
  • the terminal device 130 may include a mobile device 131, a tablet computer 132, a notebook computer 133, etc. or any combination thereof.
  • mobile device 131 may include a mobile controller handle, personal digital assistant (PDA), smartphone, etc., or any combination thereof.
  • the processing device 140 may process data and/or information acquired from the imaging device 110 , the terminal device 130 and/or the storage device 150 .
  • the processing device 140 may acquire the imaging data of the object from the imaging device 110, and determine a target image of the object and at least one reference image, wherein the imaging devices corresponding to the target image and the reference image are different (for example, respectively corresponding to MRI scanners and CT scanners).
  • the processing device 140 may correct the target image based on at least one reference image.
  • processing device 140 may be a single server or a group of servers. Server groups can be centralized or distributed. In some embodiments, processing device 140 may be local or remote. For example, processing device 140 may access information and/or data from imaging device 110 , terminal device 130 and/or storage device 150 via network 120 . As another example, the processing device 140 may be directly connected to the imaging device 110, the terminal device 130 and/or the storage device 150 to access information and/or data. In some embodiments, the processing device 140 may be implemented on a cloud platform. For example, a cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an inter-cloud, a multi-cloud, etc., or any combination thereof.
  • processing device 140 may include one or more processors (eg, single-chip processors or multi-chip processors).
  • the processing device 140 may include a central processing unit (CPU), an application-specific integrated circuit (ASIC), an application-specific instruction-set processor (ASIP), a graphics processing unit (GPU), a physics processing unit (PPU), a digital signal processor (DSP), a field-programmable gate array (FPGA), a programmable logic device (PLD), a controller, a microcontroller unit, a reduced instruction set computer (RISC), a microprocessor, etc., or any combination thereof.
  • the processing device 140 (or all or part of its functionality) may be part of the imaging device 110 or the terminal device 130 .
  • Storage device 150 may store data, instructions and/or any other information.
  • the storage device 150 may store data acquired from the imaging device 110 , the terminal device 130 and/or the processing device 140 .
  • the storage device 150 may store imaging data acquired from the imaging device 110 and related information thereof.
  • the storage device 150 may store images (eg, target images, reference images) generated based on imaging data.
  • storage device 150 may store data and/or instructions that processing device 140 executes or uses to perform the exemplary methods described in this specification.
  • the storage device 150 may include mass storage, removable storage, volatile read-write storage, read-only memory (ROM), etc., or any combination thereof.
  • the storage device 150 can be implemented on a cloud platform.
  • the storage device 150 may be connected to the network 120 to communicate with at least one other component in the image processing system 100 (eg, the imaging device 110 , the terminal device 130 , the processing device 140 ). At least one component in the image processing system 100 can access data stored in the storage device 150 (eg, a target image of a subject, a reference image, etc.) through the network 120 . In some embodiments, storage device 150 may be part of processing device 140 .
  • Fig. 2 is a block diagram of an exemplary image processing system according to some embodiments of the present specification.
  • the image processing system 200 may include a target image acquisition module 210 , a reference image acquisition module 220 and a correction module 230 .
  • the image processing system 200 may be implemented by the processing device 140 .
  • the target image acquisition module 210 may be used to acquire target images of objects.
  • Target images can refer to images that have target features or meet target requirements. For more information about the acquisition of the target image, refer to step 310 in FIG. 3 and related descriptions.
  • the reference image acquisition module 220 can be used to acquire at least one reference image of the object.
  • the reference image can be used to correct the target image.
  • the imaging device corresponding to the reference image may be different from the imaging device corresponding to the target image.
  • the reference image and the target image may respectively correspond to different modalities of the imaging device. For more information about reference image acquisition, refer to step 320 in FIG. 3 and related descriptions.
  • the correction module 230 can be used to correct the target image based on at least one reference image.
  • the correction module 230 may preprocess the target image first, and then correct the preprocessed target image based on at least one reference image.
  • the correction module 230 may determine a difference between the target image and at least one reference image, and correct the target image based on the difference. For more information about correcting the target image, refer to step 330 in FIG. 3 and related descriptions.
  • the image processing system 200 and its modules shown in FIG. 2 can be implemented in various ways, for example, implemented by hardware, software, or a combination of software and hardware.
  • the system and its modules in this specification may be implemented not only by hardware circuits such as very-large-scale integrated circuits or gate arrays, semiconductors such as logic chips and transistors, or programmable hardware devices such as field-programmable gate arrays and programmable logic devices, but also by software executed by various types of processors, or by a combination of the above hardware circuits and software (for example, firmware).
  • Fig. 3 is a flowchart of an exemplary image processing method according to some embodiments of this specification.
  • the process 300 may be executed by the processing device 140 or the image processing system 200 .
  • the process 300 may be stored in a storage device (for example, the storage device 150 or the storage unit of the processing device 140) in the form of a program or instructions, and the process 300 may be implemented when the processor or the modules shown in FIG. 2 execute the program or instructions.
  • process 300 may be accomplished with one or more additional operations not described below, and/or without one or more operations discussed below.
  • the order of operations shown in FIG. 3 is not limiting.
  • Step 310 acquiring a target image of the object.
  • this step 310 may be performed by the processing device 140 or the image processing system 200 (eg, the target image acquisition module 210 ).
  • objects may include biological objects and/or non-biological objects.
  • an object may be animate or inanimate organic and/or inorganic matter.
  • an object may include a specific part, organ, and/or tissue of a patient.
  • an object may include a patient's brain, neck, heart, lungs, etc., or any combination thereof.
  • the target image can be an image whose target features meet the target requirements.
  • the target image may include an MR image with high soft tissue resolution accuracy.
  • the target image may include a CT image with high spatial position accuracy.
  • the target features or target requirements are related to the actual needs of images or diagnosis and treatment, and are not limited in this specification.
  • the target image may include a two-dimensional (2D) image, a three-dimensional (3D) image, a four-dimensional (4D) image (eg, a time sequence of 3D images), etc., or any combination thereof.
  • the target images may include CT images, MR images, PET images, SPECT images, ultrasound images, X-ray images, etc. or any combination thereof.
  • the target image acquisition module 210 may acquire imaging data of the object from an imaging device (for example, the imaging device 110) and determine the target image based on the imaging data. In some embodiments, the target image acquisition module 210 may also perform correction operations (for example, random correction, detector normalization, scatter correction, attenuation correction) or preprocessing operations (for example, resizing, image resampling, image normalization) on the imaging data to determine the target image. In some embodiments, the target image acquisition module 210 may determine the target image based on the imaging data through an image reconstruction algorithm.
  • exemplary MR image reconstruction algorithms may include Fourier transform algorithms, back-projection algorithms (for example, convolution back-projection algorithms or filtered back-projection algorithms), iterative reconstruction algorithms, and the like.
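By way of illustration only, the image normalization preprocessing mentioned above can be sketched as a simple min-max rescaling. The helper name `normalize_image` and the list-of-rows image representation are assumptions made for this sketch, not part of the claimed method:

```python
def normalize_image(image):
    """Min-max normalize a 2D image (a list of rows) to the range [0, 1]."""
    flat = [p for row in image for p in row]
    lo, hi = min(flat), max(flat)
    if hi == lo:  # constant image: map every pixel to 0
        return [[0.0 for _ in row] for row in image]
    return [[(p - lo) / (hi - lo) for p in row] for row in image]

norm = normalize_image([[10, 20], [30, 40]])
# norm[0][0] -> 0.0 (the minimum), norm[1][1] -> 1.0 (the maximum)
```

Normalizing the imaging data to a common intensity range is one way to make images from different acquisitions comparable before further processing.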
  • the target image acquisition module 210 can acquire pre-stored target images from a storage device (eg, the storage device 150 ) or an external storage device (eg, a medical image database).
  • Step 320 acquiring at least one reference image of the object.
  • this step 320 may be performed by the processing device 140 or the image processing system 200 (eg, the reference image acquisition module 220 ).
  • the reference image can be used to correct the target image.
  • features or characteristics of the reference image are at least partially different from corresponding features or characteristics of the target image.
  • a feature or characteristic of the reference image is at least partially superior to a corresponding feature or characteristic of the target image.
  • the reference image is complementary in some dimension or aspect to features or characteristics of the target image. For example, a target image has feature A and feature B, wherein feature A (i.e., the target feature) has higher precision, and feature B has lower precision.
  • the reference image may be an image with higher precision of the feature B.
  • the target image has feature A, feature B, feature C, and feature D, wherein feature A (namely, the target feature) has higher accuracy, while feature B, feature C, and feature D have lower accuracy.
  • the reference image may be an image in which feature B, feature C, and feature D have higher precision.
  • the target image may include a magnetic resonance imaging (MRI) image with high soft tissue resolution accuracy and low spatial position accuracy
  • the reference image may include a computed tomography (CT) image with high spatial position accuracy.
  • the acquisition manner of the reference image is similar to the acquisition manner of the target image, which will not be repeated here.
  • the imaging device corresponding to the reference image may be different from the imaging device corresponding to the target image.
  • the imaging device corresponding to the target image may be an MRI device
  • the imaging device corresponding to the reference image may be an ultrasound device, X-ray device, CT device, PET device, OCT device, IVUS device, NIRS device, FIR device, etc. or any combination.
  • the reference image and the target image may respectively correspond to different modalities of the imaging device.
  • the imaging device may be an MRI-CT device
  • the target image may correspond to the MRI modality of the imaging device
  • the reference image may correspond to the CT modality of the imaging device.
  • At least one reference image may include multiple reference images.
  • multiple reference images may be acquired by different imaging devices.
  • multiple reference images may have features of different dimensions or aspects.
  • the multiple reference images may include a first reference image and a second reference image, wherein the first reference image may be acquired by a CT device, and the second reference image may be acquired by a PET device.
  • multiple reference images may be acquired by the same imaging device.
  • multiple reference images may be collected by the same imaging device based on different imaging conditions (eg, imaging angle, sampling frequency, ray intensity, magnetic field intensity, etc.). For example, multiple reference images may all be collected by CT equipment, but their corresponding imaging angles are different.
  • reference images corresponding to different dimensions or different aspects can be collected, and correspondingly, target images can be corrected from different dimensions or different aspects to improve the multi-dimensional correction effect.
  • Step 330 based on at least one reference image, correct the target image.
  • this step 330 may be performed by the processing device 140 or the image processing system 200 (eg, the correction module 230 ).
  • the processing device 140 may preprocess the target image first, and then correct the preprocessed target image based on at least one reference image.
  • the processing device 140 may preprocess the target image through a compensation algorithm.
  • Exemplary compensation algorithms may include 3D phase unwrapping methods (e.g., path-following methods, minimum Lp-norm methods), 2D phase unwrapping methods, and the like.
  • the processing device 140 may obtain a B0 field correction matrix in advance, and preprocess the target image by using the B0 field correction matrix.
  • preprocessing may also include adjusting resolution, adjusting contrast, adjusting signal-to-noise ratio, spatial domain processing (eg, smoothing, edge detection, sharpening, etc.), etc., or any combination thereof.
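By way of illustration only, the smoothing preprocessing mentioned above can be sketched as a 3x3 mean filter over a 2D image; the helper name `box_smooth` is a hypothetical choice for this sketch:

```python
def box_smooth(image):
    """Smooth a 2D image with a 3x3 mean filter; border pixels average
    only the neighbors that actually exist."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = [image[ii][jj]
                    for ii in range(max(0, i - 1), min(h, i + 2))
                    for jj in range(max(0, j - 1), min(w, j + 2))]
            out[i][j] = sum(vals) / len(vals)
    return out

smoothed = box_smooth([[1, 1, 1], [1, 10, 1], [1, 1, 1]])
# the noisy center value 10 is pulled toward its neighbors: smoothed[1][1] -> 2.0
```

In practice, larger kernels or Gaussian weights would typically replace this uniform 3x3 average, but the structure of the operation is the same.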
  • processing device 140 may determine a difference between the target image and at least one reference image, and correct the target image based on the difference.
  • the difference may reflect the difference between the target image and at least one reference image in various dimensions or aspects (e.g., spatial position, gray value, gradient value, resolution, brightness). In some embodiments, the difference may reflect the difference between the target image and the at least one reference image in terms of features other than the target features. For example, a target image has a feature A and a feature B, wherein feature A has higher accuracy and feature B has lower accuracy.
  • the reference image may be an image with higher accuracy of the feature B. Accordingly, the difference may be the difference between the feature B of the target image and the feature B of the reference image.
  • the difference can be represented by numerical values, vectors, matrices, models, images, and the like.
  • the difference may include a spatial position difference of the same location point of the object in the target image and at least one reference image.
  • the processing device 140 may map the target image and the at least one reference image to the same spatial coordinate system, and determine the difference in the spatial coordinates of the same position point of the object in the target image and the at least one reference image (for example, between the spatial coordinates difference).
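By way of illustration only, once the target image and the reference image share a spatial coordinate system, the spatial position difference at corresponding location points can be sketched as per-point displacement vectors. The point lists and the helper name `point_displacements` are assumptions made for this sketch:

```python
def point_displacements(target_pts, reference_pts):
    """Per-landmark displacement vectors (dx, dy) from target-image
    coordinates to reference-image coordinates; index i in each list
    is assumed to mark the same physical location of the object."""
    return [(rx - tx, ry - ty)
            for (tx, ty), (rx, ry) in zip(target_pts, reference_pts)]

target = [(10.0, 12.0), (40.0, 45.0)]
reference = [(11.5, 12.0), (40.0, 47.0)]
disp = point_displacements(target, reference)
# disp -> [(1.5, 0.0), (0.0, 2.0)]
```

Each vector is one sample of the spatial coordinate difference described above; collecting them over many location points gives the difference between the two images as a whole.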
  • FIG. 4A is a schematic diagram of an exemplary reference image shown according to some embodiments of this specification
  • FIG. 4B is a schematic diagram of an exemplary target image shown according to some embodiments of this specification.
  • the dots in FIG. 4A and the dots in FIG. 4B are in one-to-one correspondence; dots that correspond to each other represent the same position of the object, and the coordinate difference between corresponding dots is the difference between the target image and the reference image.
  • the processing device 140 can register the at least one reference image and the target image, and determine the difference between the at least one reference image and the target image based on the first registration result. For example, after registering the at least one reference image and the target image, the processing device 140 may directly perform subtraction processing on the registered at least one reference image and the target image to determine the difference between the two. As another example, after registering the at least one reference image and the target image, the processing device 140 may directly perform division processing on the registered at least one reference image and the target image to determine the difference between the two.
  • exemplary image registration algorithms may include mean absolute difference (MAD), sum of absolute differences (SAD), sum of squared differences (SSD), mean squared differences (MSD), normalized cross-correlation (NCC), sequential similarity detection algorithm (SSDA), local gray value coding algorithms, etc., or any combination thereof.
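By way of illustration only, the similarity measures named above (MAD, SAD, SSD, MSD, NCC) can be sketched for two equal-sized grayscale images; the function name and dictionary output are choices made for this sketch, not part of the claimed method:

```python
import math

def registration_metrics(a, b):
    """MAD, SAD, SSD, MSD and NCC between two equal-sized 2D images."""
    fa = [p for row in a for p in row]
    fb = [p for row in b for p in row]
    n = len(fa)
    diffs = [x - y for x, y in zip(fa, fb)]
    sad = sum(abs(d) for d in diffs)          # sum of absolute differences
    ssd = sum(d * d for d in diffs)           # sum of squared differences
    mad, msd = sad / n, ssd / n               # their per-pixel means
    ma, mb = sum(fa) / n, sum(fb) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(fa, fb))
    den = math.sqrt(sum((x - ma) ** 2 for x in fa) *
                    sum((y - mb) ** 2 for y in fb))
    ncc = num / den if den else 0.0           # normalized cross-correlation
    return {"MAD": mad, "SAD": sad, "SSD": ssd, "MSD": msd, "NCC": ncc}
```

A registration algorithm would evaluate such a measure over candidate alignments and keep the one that minimizes the difference terms (or maximizes NCC).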
  • the processing device 140 may input the at least one reference image and the target image into the first model and, based on the output of the first model, determine the difference between the at least one reference image and the target image.
  • For more information about the first model, please refer to FIG. 5A and its related description.
  • the processing device 140 may determine, for each of multiple position points of the object, the point difference (e.g., spatial location difference, gray value difference, gradient value difference, resolution difference, brightness difference) between the target image and at least one reference image, and construct a difference model based on the multiple point differences corresponding to the multiple position points; the difference model can reflect the difference between the at least one reference image and the target image. See FIG. 5B and its associated description for more on the difference model.
  • the processing device 140 may adjust the corresponding features of the target image based on the difference to correct the target image.
  • the target image has feature A and feature B, wherein feature A has higher precision, and feature B has lower precision.
  • the reference image may be an image with higher accuracy of the feature B.
  • the difference may be the difference between feature B of the target image and feature B of the reference image.
  • the processing device 140 can adjust the feature B of the target image based on the difference so that it is close to the feature B of the reference image.
  • FIG. 4C is a schematic diagram of an exemplary correction process according to some embodiments of the present specification.
  • the arrows in FIG. 4C represent the coordinate differences of multiple position points of the object between at least one reference image and the target image.
  • the processing device 140 can adjust the coordinate values of each point in the target image based on the coordinate difference, thereby improving the accuracy of its spatial position.
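By way of illustration only, adjusting the coordinate values of points in the target image based on the coordinate differences (the arrows in FIG. 4C) can be sketched as follows; the helper name `apply_displacements` and the point lists are assumptions made for this sketch:

```python
def apply_displacements(target_pts, displacements):
    """Shift each target-image point (x, y) by its displacement (dx, dy)
    so that it lines up with the corresponding reference-image point."""
    return [(tx + dx, ty + dy)
            for (tx, ty), (dx, dy) in zip(target_pts, displacements)]

corrected = apply_displacements([(10.0, 12.0), (40.0, 45.0)],
                                [(1.5, 0.0), (0.0, 2.0)])
# corrected -> [(11.5, 12.0), (40.0, 47.0)]
```

In a full implementation the sparse per-point displacements would typically be interpolated into a dense deformation field before resampling the target image.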
  • the processing device 140 may input the difference and the target image into the machine learning model, and based on the output of the machine learning model, determine the corrected target image.
  • At least one reference image may include multiple reference images.
  • Processing device 140 may determine differences between the target image and each of the plurality of reference images. The processing device 140 may perform comprehensive processing on the differences respectively corresponding to the multiple reference images. For example, the processing device 140 may perform weighting processing on differences respectively corresponding to multiple reference images. Further, the processing device 140 may correct the target image based on the comprehensive processing result. For more information about comprehensively processing the differences corresponding to multiple reference images, please refer to FIG. 6A, FIG. 6B and their related descriptions.
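By way of illustration only, the weighting of differences from multiple reference images can be sketched as a normalized weighted sum of per-pixel difference maps; the helper name `combine_differences` and the example weights are assumptions made for this sketch:

```python
def combine_differences(diff_maps, weights):
    """Weighted combination of per-pixel difference maps, one map per
    reference image; weights are normalized so they sum to 1."""
    total = sum(weights)
    norm = [w / total for w in weights]
    h, w = len(diff_maps[0]), len(diff_maps[0][0])
    out = [[0.0] * w for _ in range(h)]
    for dmap, wt in zip(diff_maps, norm):
        for i in range(h):
            for j in range(w):
                out[i][j] += wt * dmap[i][j]
    return out

combined = combine_differences([[[2.0]], [[4.0]]], [1, 1])
# equal weights average the two maps: combined -> [[3.0]]
```

The weights could reflect, for example, how much each reference modality is trusted for the aspect being corrected.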
  • processing device 140 may determine differences between the target image and each of the plurality of reference images.
  • the processing device 140 may correct the target image based on the difference to determine an intermediate corrected image. Further, the processing device 140 may determine the corrected target image based on the multiple intermediate corrected images. For more information on determining the intermediate corrected image, refer to FIG. 7A, FIG. 7B and their related descriptions.
  • the processing device 140 can register the at least one reference image and the target image, and determine the corrected target image based on the second registration result of the at least one reference image and the target image. For example, after registering the at least one reference image and the target image, the processing device 140 may directly perform subtraction processing on the registered at least one reference image and the target image, so as to determine the corrected target image. For another example, after registering the at least one reference image and the target image, the processing device 140 may directly perform division processing on the registered at least one reference image and the target image, so as to determine the corrected target image.
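The subtraction and division alternatives on already-registered images can be sketched as below. The `eps` guard in the division branch is an added assumption to keep the ratio finite where the target intensity is zero; the patent does not specify one.

```python
import numpy as np

def combine_registered(reference, target, mode="subtract", eps=1e-6):
    """Combine a registered reference image with the target image.

    mode="subtract": per-pixel difference (reference - target).
    mode="divide":   per-pixel ratio (reference / target).
    """
    reference = np.asarray(reference, dtype=float)
    target = np.asarray(target, dtype=float)
    if mode == "subtract":
        return reference - target
    if mode == "divide":
        # eps keeps the ratio finite where the target intensity is zero.
        return reference / (target + eps)
    raise ValueError("mode must be 'subtract' or 'divide'")

ref = np.array([[4.0, 9.0], [16.0, 25.0]])
tgt = np.array([[2.0, 3.0], [4.0, 5.0]])
diff_image = combine_registered(ref, tgt, mode="subtract")
ratio_image = combine_registered(ref, tgt, mode="divide")
```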
  • the processing device 140 may input at least one reference image and the target image into the second model, and determine the corrected target image based on the output of the second model. For more information about the second model, refer to FIG. 8A, FIG. 8B and their related descriptions.
  • the target image may be corrected based on a difference between at least one reference image of the object and the target image, wherein features or characteristics of the reference image are at least partially better than corresponding features or characteristics of the target image.
  • other features of the target image can be adjusted to be close to the corresponding features of the reference image, thereby enriching the features of the target image, improving the image quality of the target image, and providing comprehensive image information to improve subsequent possible therapeutic effects.
  • Fig. 5A is a schematic diagram of an exemplary method for determining the difference between a target image and a reference image according to some embodiments of the present specification.
  • a target image 510 and at least one reference image 515 may be input into a first model 520, and based on the output of the first model 520, the difference 530 between the target image and at least one reference image may be determined.
  • the first model 520 may be a convolutional neural network model (Convolutional Neural Network, CNN), a deep neural network model (Deep Neural Network, DNN), a recurrent neural network model (Recurrent Neural Network, RNN), a graph neural network model (Graph Neural Network, GNN), a generative adversarial network model (Generative Adversarial Network, GAN), etc., or any combination thereof.
  • the first model 520 may be determined through training based on multiple sets of first training samples 540.
  • Each set of first training samples 540 may include a sample target image 541 of the sample object, at least one sample reference image 542 of the sample object, and a corresponding sample difference 543, wherein the sample target image 541 and at least one sample reference image 542 are training data, and the corresponding sample difference 543 is a label.
  • the sample target image 541 and at least one sample reference image 542 correspond to different imaging devices. In some embodiments, at least one sample reference image 542 corresponds to a different imaging device.
  • the relationship between the sample target image 541 and the sample reference image 542 is similar to the relationship between the target image and the reference image, and a more specific description can be found in FIG. 3 .
  • the sample difference 543 between the sample target image 541 and at least one sample reference image 542 can be marked manually by a user, or automatically marked by the image processing system 100 .
  • the processing device 140 takes the sample target image 541 and at least one sample reference image 542 as input, uses the corresponding sample difference 543 as supervision, and trains the first model 520 through a machine learning algorithm (for example, the stochastic gradient descent method).
  • the first loss function may be a perceptual loss function. In some embodiments, the first loss function may also be other loss functions, for example, a square loss function, a logistic regression loss function, and the like.
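A deliberately tiny training sketch follows. It stands in for the training of the first model 520: a linear map trained by stochastic gradient descent under a squared loss (one of the loss functions mentioned above). The data, model form, and hyperparameters are illustrative assumptions; the patent's first model would be a neural network trained on image pairs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for training pairs: each row of X plays the role of a
# flattened (sample target image, sample reference image) pair, and y
# plays the role of the labeled sample difference used as supervision.
n_samples, n_features = 64, 8
X = rng.normal(size=(n_samples, n_features))
true_w = rng.normal(size=n_features)
y = X @ true_w  # noiseless labels, so the model can fit them exactly

w = np.zeros(n_features)  # the "first model" is a linear map here
learning_rate = 0.02
for epoch in range(200):
    for i in range(n_samples):  # stochastic gradient descent: one sample per step
        error = X[i] @ w - y[i]            # prediction minus supervision label
        w -= learning_rate * error * X[i]  # gradient of the squared loss for sample i

train_loss = float(np.mean((X @ w - y) ** 2))  # near zero once training has converged
```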
  • Fig. 5B is a schematic diagram of an exemplary method for determining the difference between a target image and a reference image according to some embodiments of the present specification.
  • the processing device 140 may determine the point differences 570 of multiple position points between the target image 560 and at least one reference image 565, and, based on the multiple point differences corresponding to the multiple position points, construct a difference model 580 to determine the difference 590 between the target image 560 and at least one reference image 565.
  • point differences may include spatial position differences, gray value differences, gradient value differences, resolution differences, brightness differences, etc. or any combination thereof.
  • the processing device 140 may map the target image and at least one reference image to the same spatial coordinate system, and determine the spatial position difference of each position point (for example, the difference between spatial coordinates).
  • the coordinates of a position point in the target image 560 are (x_a, y_a, z_a), and the coordinates of the corresponding position point in at least one reference image 565 are (x_b, y_b, z_b); the spatial position difference d of the position point between the target image 560 and at least one reference image 565 can then be determined based on the following formula (1) or formula (2):
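Formula (1) and formula (2) are not reproduced in this text. As an assumption, the Euclidean distance and the per-axis (Manhattan) distance between the two coordinate triples are two plausible readings, sketched below:

```python
import math

def euclidean_distance(pa, pb):
    """d = sqrt((xa-xb)^2 + (ya-yb)^2 + (za-zb)^2) -- one plausible reading of formula (1)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(pa, pb)))

def manhattan_distance(pa, pb):
    """d = |xa-xb| + |ya-yb| + |za-zb| -- one plausible reading of formula (2)."""
    return sum(abs(a - b) for a, b in zip(pa, pb))

point_target = (1.0, 2.0, 2.0)     # (x_a, y_a, z_a)
point_reference = (4.0, 6.0, 2.0)  # (x_b, y_b, z_b)
d1 = euclidean_distance(point_target, point_reference)  # 5.0
d2 = manhattan_distance(point_target, point_reference)  # 7.0
```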
  • the processing device 140 may construct a difference model 580 based on a plurality of point differences respectively corresponding to a plurality of position points, so as to determine a difference 590 between at least one reference image and the target image.
  • the processing device 140 may construct the difference model 580 through a data modeling method.
  • the processing device 140 may construct the difference model 580 through a three-dimensional modeling method.
  • the processing device 140 can construct the difference model 580 by principal component analysis, simulation, regression analysis, cluster analysis, and other methods.
  • the difference 590 may be represented by numerical values, vectors, matrices, models, images, and the like.
  • the processing device 140 can determine, by interpolation, the spatial position differences corresponding to intermediate position points between adjacent position points, so as to determine the spatial position differences corresponding to a large number of position points.
  • the processing device 140 can build a difference model 580 (for example, a vector diagram as shown in FIG. 4C ) based on a large number of spatial position differences, and the difference model 580 can reflect the spatial position difference between at least one reference image and the target image as a whole.
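The interpolation-based densification can be sketched in one dimension as follows, assuming linear interpolation between sparsely known position points (the patent does not fix the interpolation scheme):

```python
import numpy as np

# Sparse measurements: the spatial position difference d is known only at a
# few position points along one axis of the image.
known_positions = np.array([0.0, 10.0, 20.0, 30.0])
known_differences = np.array([0.0, 1.5, 1.0, 2.5])

# Intermediate position points at which the difference is also needed.
dense_positions = np.arange(0.0, 31.0, 1.0)

# Linear interpolation fills in the difference at every intermediate point,
# yielding a dense difference field usable as a simple difference model.
dense_differences = np.interp(dense_positions, known_positions, known_differences)
```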
  • Fig. 6A is a flowchart of an exemplary image processing method according to some embodiments of the present specification.
  • the process 600 may be executed by the processing device 140 or the image processing system 200 .
  • the process 600 may be stored in a storage device (for example, the storage device 150, the storage unit of the processing device 140) in the form of a program or an instruction, and when the processor or the module shown in FIG. 2 executes the program or the instruction, the process 600 may be implemented.
  • process 600 may be accomplished with one or more additional operations not described below, and/or without one or more operations discussed below.
  • the order of operations shown in FIG. 6A is not limiting.
  • correcting the target image described in operation 330 in FIG. 3 may be performed according to process 600 .
  • Step 610 determine the difference between the target image and each of the plurality of reference images (difference 660, difference 665, etc. as shown in FIG. 6B). In some embodiments, this step 610 may be performed by the processing device 140 or the image processing system 200 (eg, the correction module 230 ).
  • the difference may reflect the difference between the target image and each of the multiple reference images in various dimensions or aspects (for example, spatial position, gray value, gradient value, resolution, brightness, etc.).
  • the processing device 140 may determine the difference between the target image and each of the plurality of reference images through the first model or the difference model. For more information on determining the difference, refer to FIG. 5A, FIG. 5B and their related descriptions.
  • Step 620 perform comprehensive processing on the differences corresponding to the multiple reference images.
  • this step 620 may be performed by the processing device 140 or the image processing system 200 (eg, the correction module 230 ).
  • the processing device 140 may perform weighting processing on the differences respectively corresponding to the multiple reference images. For example, the processing device 140 may assign different weights to the differences corresponding to the multiple reference images, and determine the overall difference corresponding to the multiple reference images. In some embodiments, the weight of the difference corresponding to each reference image may be determined by the user or determined by the image processing system 100 according to image processing requirements. For example, the target image has feature A, feature B, feature C, and feature D, wherein, feature A (namely, the target feature) has higher accuracy, while feature B, feature C, and feature D have lower accuracy.
  • there may be three reference images, in which feature B, feature C, and feature D, respectively, have relatively high precision.
  • the differences between the reference images and the target image may include a feature B difference, a feature C difference, and a feature D difference, respectively.
  • if the final image processing goal is to correct feature B first, then feature C, and then feature D, the weights of the differences corresponding to the three reference images are correspondingly reduced in sequence.
  • the processing device 140 may determine the overall difference corresponding to multiple reference images based on the following formula (3):

    d_t = Σ_{i=1}^{n} (r_i × d_i)  (3)

  • where d_t represents the overall difference corresponding to the multiple reference images; r_i represents the weight of the difference corresponding to the i-th reference image; d_i represents the difference corresponding to the i-th reference image; and n represents the total number of the multiple reference images, n being a positive integer.
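Reading formula (3) as a weighted sum of per-reference difference matrices, the comprehensive processing can be sketched as below; the weight values are illustrative, decreasing in the order feature B, feature C, feature D as in the example above:

```python
import numpy as np

def overall_difference(differences, weights):
    """d_t = sum over i of r_i * d_i, across the n reference images."""
    differences = [np.asarray(d, dtype=float) for d in differences]
    weights = np.asarray(weights, dtype=float)
    total = np.zeros_like(differences[0])
    for r_i, d_i in zip(weights, differences):
        total += r_i * d_i  # each reference contributes its weighted difference
    return total

# Three reference images, weights decreasing (feature B, then C, then D).
d_B = np.array([[1.0, 0.0], [0.0, 1.0]])
d_C = np.array([[0.0, 2.0], [0.0, 0.0]])
d_D = np.array([[0.0, 0.0], [4.0, 0.0]])
d_t = overall_difference([d_B, d_C, d_D], weights=[0.5, 0.3, 0.2])
```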
  • the processing device 140 may perform average processing or weighted average processing on differences respectively corresponding to multiple reference images.
  • multiple reference images may correspond to the same imaging device or the same imaging conditions, but due to unavoidable systematic errors, certain differences or deviations exist among the multiple reference images.
  • through average processing, these errors can be balanced, thereby improving the subsequent correction effect.
  • Step 630 based on the comprehensive processing result (such as the comprehensive processing result 670 shown in FIG. 6B), determine the corrected target image (the corrected target image 680 shown in FIG. 6B). In some embodiments, this step 630 may be performed by the processing device 140 or the image processing system 200 (eg, the correction module 230 ).
  • based on the comprehensive processing result, the processing device 140 can adjust the corresponding features of the target image to be close to the corresponding features of the reference images, so as to correct the target image.
  • the comprehensive processing result may be represented in the form of numerical values, vectors, matrices, models, images, and the like.
  • the processing device 140 may adjust pixel values or related information of each pixel of the target image based on the matrix corresponding to the comprehensive processing result, so as to correct the target image.
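One simple reading of this pixel-level adjustment treats the comprehensive processing result as an additive per-pixel correction matrix; the additive form and the clipping range are assumptions for illustration, not the patent's prescribed mechanism:

```python
import numpy as np

def apply_correction_matrix(target_image, correction_matrix, lo=0.0, hi=255.0):
    """Adjust each pixel of the target image by the matrix entry at the
    same position, clipping the result to a valid intensity range."""
    corrected = np.asarray(target_image, dtype=float) + np.asarray(correction_matrix, dtype=float)
    return np.clip(corrected, lo, hi)

image = np.array([[100.0, 200.0], [50.0, 250.0]])
correction = np.array([[10.0, -20.0], [-60.0, 10.0]])
corrected_image = apply_correction_matrix(image, correction)
```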
  • Fig. 7A is a flowchart of an exemplary image processing method according to some embodiments of the present specification.
  • the process 700 may be executed by the processing device 140 or the image processing system 200 .
  • the process 700 may be stored in a storage device (for example, the storage device 150, the storage unit of the processing device 140) in the form of a program or an instruction, and when the processor or the module shown in FIG. 2 executes the program or the instruction, the process 700 may be implemented.
  • process 700 may be accomplished with one or more additional operations not described below, and/or without one or more operations discussed below.
  • the order of operations shown in FIG. 7A is not limiting.
  • correcting the target image described in operation 330 in FIG. 3 may be performed according to process 700 .
  • Step 710 determine the difference between the target image and each of the plurality of reference images.
  • this step 710 may be performed by the processing device 140 or the image processing system 200 (eg, the correction module 230 ).
  • processing device 140 may determine difference 770 between target image 750 and reference image 760 , difference 775 between target image 755 and reference image 765 , and so on. For more information on determining the difference, refer to FIG. 5A, FIG. 5B and their related descriptions.
  • Step 720 based on the difference, correct the target image to determine an intermediate corrected image.
  • this step 720 may be performed by the processing device 140 or the image processing system 200 (eg, the correction module 230 ).
  • processing device 140 may correct target image 750 based on difference 770 to determine intermediate corrected image 780; processing device 140 may correct target image 755 based on difference 775 to determine intermediate corrected image 785; and so on.
  • the processing device 140 can adjust the corresponding features of the target image to be close to the corresponding features of the reference image through the difference, so as to correct the target image.
  • for the content of correcting the target image based on the difference, refer to step 330 or step 630.
  • Step 730 Determine a corrected target image based on the multiple intermediate corrected images.
  • this step 730 may be performed by the processing device 140 or the image processing system 200 (eg, the correction module 230 ).
  • processing device 140 may determine final corrected target image 790 based on intermediate corrected image 780, intermediate corrected image 785, and the like.
  • the processing device 140 may perform weighting processing on multiple intermediate corrected images. For example, the processing device 140 may assign different weights to the multiple intermediate corrected images, and perform weighting processing based on the weights.
  • the weight of each intermediate corrected image can be determined by the user or determined by the image processing system 100 according to image processing requirements.
  • the target image has feature A, feature B, feature C, and feature D, wherein, feature A (namely, the target feature) has higher accuracy, while feature B, feature C, and feature D have lower accuracy.
  • there may be three reference images, in which feature B, feature C, and feature D, respectively, have relatively high precision.
  • the target image is corrected based on the difference between each reference image and the target image, and an intermediate corrected image B, an intermediate corrected image C, and an intermediate corrected image D are respectively obtained.
  • if the final image processing goal is to correct feature B first, then feature C, and then feature D, the weights corresponding to the three intermediate corrected images are correspondingly reduced in sequence.
  • the processing device 140 may determine the final corrected target image based on the following formula (4):

    I_t = Σ_{i=1}^{n} (W_i × R_i)  (4)

  • where I_t represents the final corrected target image; W_i represents the weight corresponding to the i-th intermediate corrected image; R_i represents the i-th intermediate corrected image; and n represents the total number of the multiple reference images (or intermediate corrected images), n being a positive integer.
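Reading formula (4) as a weighted sum of the intermediate corrected images, the combination can be sketched as below; using weights that sum to one keeps intensities in the original range, which is an added assumption rather than a requirement of the formula:

```python
import numpy as np

def fuse_intermediate_images(intermediate_images, weights):
    """I_t = sum over i of W_i * R_i, across the n intermediate corrected images."""
    weights = np.asarray(weights, dtype=float)
    fused = np.zeros_like(np.asarray(intermediate_images[0], dtype=float))
    for w_i, r_i in zip(weights, intermediate_images):
        fused += w_i * np.asarray(r_i, dtype=float)
    return fused

# Three intermediate corrected images with decreasing weights (B, then C, then D).
R_B = np.array([[100.0, 100.0]])
R_C = np.array([[120.0, 80.0]])
R_D = np.array([[110.0, 90.0]])
I_t = fuse_intermediate_images([R_B, R_C, R_D], weights=[0.5, 0.3, 0.2])
```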
  • the processing device 140 may perform average processing or weighted average processing on the multiple intermediate corrected images respectively corresponding to the multiple reference images.
  • multiple reference images may correspond to the same imaging device or the same imaging conditions, but due to unavoidable systematic errors, certain differences or deviations exist among the multiple intermediate corrected images.
  • through average processing, these errors can be balanced, thereby improving the subsequent correction effect.
  • in this way, the calculation amount of a single correction can be reduced, the different weights of different features to be corrected can be taken into account, and possible systematic errors can be equalized, thereby improving the image correction effect.
  • Fig. 8A is a flowchart of an exemplary image processing method according to some embodiments of the present specification.
  • the process 800 may be executed by the processing device 140 or the image processing system 200 .
  • the process 800 may be stored in a storage device (for example, the storage device 150, the storage unit of the processing device 140) in the form of a program or an instruction, and when the processor or the module shown in FIG. 2 executes the program or the instruction, the process 800 may be implemented.
  • process 800 may be accomplished with one or more additional operations not described below, and/or without one or more operations discussed below.
  • the order of operations as shown in FIG. 8A is not limiting.
  • correcting the target image described in operation 330 in FIG. 3 may be performed according to process 800 .
  • Step 810 input at least one reference image and the target image into the second model.
  • this step 810 may be performed by the processing device 140 or the image processing system 200 (eg, the correction module 230 ).
  • Step 820 based on the output of the second model, determine the corrected target image.
  • this step 820 may be performed by the processing device 140 or the image processing system 200 (eg, the correction module 230 ).
  • Fig. 8B is a schematic diagram of an exemplary image processing method according to some embodiments of the present specification.
  • the target image 860 and at least one reference image 865 may be input into the second model 870 to obtain a corrected target image 880 .
  • the second model 870 may be a convolutional neural network model (Convolutional Neural Network, CNN), a deep neural network model (Deep Neural Network, DNN), a recurrent neural network model (Recurrent Neural Network, RNN), a graph neural network model (Graph Neural Network, GNN), a generative adversarial network model (Generative Adversarial Network, GAN), etc., or any combination thereof.
  • the second model 870 may be determined through training based on multiple sets of second training samples 890.
  • Each set of second training samples 890 may include a sample target image 891 of the sample object, at least one sample reference image 892 of the sample object, and a corresponding sample corrected image 893, wherein the sample target image 891 and at least one sample reference image 892 are training data, and the corresponding sample corrected image 893 is a label.
  • the sample target image 891 and the at least one sample reference image 892 correspond to different imaging devices. In some embodiments, at least one sample reference image 892 corresponds to a different imaging device.
  • the relationship between the sample target image 891 and the sample reference image 892 is similar to the relationship between the target image and the reference image, and a more specific description can be found in FIG. 3 .
  • the sample corrected image 893 corresponding to the sample target image 891 and at least one sample reference image 892 can be manually marked by a user (e.g., manually modified or edited by a doctor), or automatically marked by the image processing system 100.
  • the processing device 140 takes the sample target image 891 and at least one sample reference image 892 as input, uses the corresponding sample corrected image 893 as supervision, and trains the second model 870 through a machine learning algorithm (for example, the stochastic gradient descent method).
  • the second loss function may be a perceptual loss function. In some embodiments, the second loss function may also be other loss functions, for example, a square loss function, a logistic regression loss function, and the like.
  • (1) the target image and the reference image are acquired through different imaging methods, and the target image is corrected based on the difference between at least one reference image and the target image, so that while the target features of the target image are retained, other features of the target image can be adjusted to be close to the corresponding features of the reference image, thereby enriching the features of the target image and improving the image quality of the target image; (2) determining the difference between the reference image and the target image by using a machine learning model or by constructing a difference model can improve the accuracy, efficiency, and comprehensiveness of determining the difference; (3) through the comprehensive processing of multiple reference images, different image processing requirements can be met, and the image correction effect can be improved.
  • Some embodiments of this specification also provide an image processing device, which includes: at least one storage medium storing computer instructions; at least one processor executing the computer instructions to implement the image processing method described in this specification.
  • Some embodiments of this specification also provide a computer-readable storage medium, which stores computer instructions.
  • when a computer reads the computer instructions, the computer executes the image processing method described in this specification.
  • details described in FIG. 1 to FIG. 8B are not repeated here.
  • numbers describing the quantities of components and attributes are used. It should be understood that such numbers used in the description of the embodiments are modified in some examples by the terms "about", "approximately", or "substantially". Unless otherwise stated, "about", "approximately", or "substantially" indicates that the stated figure allows for a variation of ±20%. Accordingly, in some embodiments, the numerical parameters used in the specification and claims are approximations that can vary depending upon the desired characteristics of individual embodiments. In some embodiments, numerical parameters should take into account the specified significant digits and adopt the general method of retaining digits. Although the numerical ranges and parameters used in some embodiments of this specification to confirm the breadth of the range are approximations, in specific embodiments such numerical values are set as precisely as practicable.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

Embodiments of the present description provide an image processing method and system. The method comprises: obtaining a target image of an object; obtaining at least one reference image of the object, the imaging device corresponding to the at least one reference image being different from the imaging device corresponding to the target image; and correcting the target image on the basis of the at least one reference image.

Description

一种图像处理方法和系统A kind of image processing method and system 技术领域technical field
本说明书涉及医疗技术领域,尤其涉及一种图像处理方法和系统。This description relates to the field of medical technology, in particular to an image processing method and system.
背景技术Background technique
近年来,医学成像广泛用于各种医学病症的诊断和治疗。不同的成像方式有不同的优势与劣势。例如,计算机断层扫描(CT)获取的图像空间位置精度高,但是对软组织分辨能力差;磁共振(MR)获取的图像空间位置精度较低,但是对软组织分辨能力好。因此,提供一种能够综合不同成像方式的优势的图像处理方式及系统是很有必要的。In recent years, medical imaging has been widely used in the diagnosis and treatment of various medical conditions. Different imaging methods have different advantages and disadvantages. For example, images acquired by computed tomography (CT) have high spatial position accuracy, but poor soft tissue resolution; magnetic resonance (MR) images have low spatial position accuracy, but have good soft tissue resolution. Therefore, it is necessary to provide an image processing method and system that can combine the advantages of different imaging methods.
发明内容Contents of the invention
本说明书实施例之一提供一种图像处理系统。所述图像处理系统包括存储设备以及处理器,所述存储设备用于存储计算机指令;所述处理器用于与所述存储设备相连接。当执行所述计算机指令时,所述处理器使所述系统执行下述操作:获取对象的目标图像;获取所述对象的至少一幅参考图像,所述至少一幅参考图像对应的成像设备不同于所述目标图像对应的成像设备;以及基于所述至少一幅参考图像,校正所述目标图像。One of the embodiments of this specification provides an image processing system. The image processing system includes a storage device and a processor, the storage device is used to store computer instructions; the processor is used to connect with the storage device. When executing the computer instructions, the processor causes the system to perform the following operations: acquire a target image of the object; acquire at least one reference image of the object, the at least one reference image corresponds to a different imaging device an imaging device corresponding to the target image; and correcting the target image based on the at least one reference image.
本说明书实施例之一提供一种图像处理方法。所述方法包括:获取对象的目标图像;获取所述对象的至少一幅参考图像,所述至少一幅参考图像对应的成像设备不同于所述目标图像对应的成像设备;以及基于所述至少一幅参考图像,校正所述目标图像。One of the embodiments of this specification provides an image processing method. The method includes: acquiring a target image of the object; acquiring at least one reference image of the object, the imaging device corresponding to the at least one reference image is different from the imaging device corresponding to the target image; and based on the at least one reference image A reference image is used to correct the target image.
本说明书实施例之一提供一种计算机可读存储介质,所述存储介质存储计算机指令,当计算机读取所述计算机指令,所述计算机执行一种图像处理方法。所述方法包括:获取对象的目标图像;获取所述对象的至少一幅参考图像,所述至少一幅参考图像对应的成像设备不同于所述目标图像对应的成像设备;以及基于所述至少一幅参考图像,校正所述目标图像。One of the embodiments of the present specification provides a computer-readable storage medium, the storage medium stores computer instructions, and when a computer reads the computer instructions, the computer executes an image processing method. The method includes: acquiring a target image of the object; acquiring at least one reference image of the object, the imaging device corresponding to the at least one reference image is different from the imaging device corresponding to the target image; and based on the at least one reference image A reference image is used to correct the target image.
本说明书实施例之一提供一种图像处理系统。所述系统包括:目标图像获取模块,用于获取对象的目标图像;参考图像获取模块,用于获取所述对象的至少一幅参考图像,所述至少一幅参考图像对应的成像设备不同于所述目标图像对应的成像设备;以及校正模块,用于基于所述至少一幅参考图像,校正所述目标图像。One of the embodiments of this specification provides an image processing system. The system includes: a target image acquisition module, used to acquire a target image of the object; a reference image acquisition module, used to acquire at least one reference image of the object, and the imaging device corresponding to the at least one reference image is different from the an imaging device corresponding to the target image; and a correction module, configured to correct the target image based on the at least one reference image.
附图说明Description of drawings
The present specification will be further described by way of exemplary embodiments, which are described in detail with reference to the accompanying drawings. These embodiments are non-limiting; in these embodiments, like reference numerals denote like structures, wherein:
FIG. 1 is a schematic diagram of an application scenario of an exemplary image processing system according to some embodiments of the present specification;
FIG. 2 is a block diagram of an exemplary image processing system according to some embodiments of the present specification;
FIG. 3 is a flowchart of an exemplary image processing method according to some embodiments of the present specification;
FIG. 4A is a schematic diagram of an exemplary reference image according to some embodiments of the present specification;
FIG. 4B is a schematic diagram of an exemplary target image according to some embodiments of the present specification;
FIG. 4C is a schematic diagram of an exemplary correction process according to some embodiments of the present specification;
FIG. 5A is a schematic diagram of an exemplary method for determining a difference between a target image and a reference image according to some embodiments of the present specification;
FIG. 5B is a schematic diagram of an exemplary method for determining a difference between a target image and a reference image according to some embodiments of the present specification;
FIG. 6A is a flowchart of an exemplary image processing method according to some embodiments of the present specification;
FIG. 6B is a schematic diagram of an exemplary image processing method according to some embodiments of the present specification;
FIG. 7A is a flowchart of an exemplary image processing method according to some embodiments of the present specification;
FIG. 7B is a schematic diagram of an exemplary image processing method according to some embodiments of the present specification;
FIG. 8A is a flowchart of an exemplary image processing method according to some embodiments of the present specification; and
FIG. 8B is a schematic diagram of an exemplary image processing method according to some embodiments of the present specification.
DETAILED DESCRIPTION
To more clearly illustrate the technical solutions of the embodiments of the present specification, the accompanying drawings used in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are merely some examples or embodiments of the present specification, and a person of ordinary skill in the art may apply the present specification to other similar scenarios according to these drawings without creative effort. Unless apparent from the context or otherwise indicated, like reference numerals in the figures denote like structures or operations.
It should be understood that the terms "system," "device," "unit," and/or "module" as used herein are a way of distinguishing different components, elements, parts, portions, or assemblies at different levels. However, these words may be replaced by other expressions if the other expressions achieve the same purpose.
As used in the present specification and the claims, the terms "a," "an," "the," and/or "said" do not refer specifically to the singular and may also include the plural, unless the context clearly indicates otherwise. Generally, the terms "comprise" and "include" merely indicate the inclusion of explicitly identified steps and elements; these steps and elements do not constitute an exclusive list, and a method or device may also include other steps or elements.
Flowcharts are used in the present specification to illustrate the operations performed by a system according to embodiments of the present specification. It should be understood that the preceding or following operations are not necessarily performed in the exact order shown. Instead, the steps may be processed in reverse order or simultaneously. Other operations may also be added to these procedures, or one or more steps may be removed from them.
FIG. 1 is a schematic diagram of an application scenario of an exemplary image processing system according to some embodiments of the present specification. As shown in FIG. 1, in some embodiments, an image processing system 100 may include an imaging device 110, a network 120, a terminal device 130, a processing device 140, and a storage device 150. The components of the image processing system 100 may be connected to one another through the network 120. For example, the imaging device 110 and the terminal device 130 may be connected or communicate through the network 120. As another example, the imaging device 110 and the processing device 140 may be connected or communicate through the network 120. In some embodiments, the connections between the components of the image processing system 100 are variable. For example, the terminal device 130 may be directly connected to the processing device 140.
The imaging device 110 may be configured to scan an object within a detection region or a scanning region to obtain imaging data of the object. In some embodiments, the object may include a biological object and/or a non-biological object. For example, the object may be animate or inanimate organic and/or inorganic matter.
In some embodiments, the imaging device 110 may be a non-invasive imaging device used for disease diagnosis or research purposes. For example, the imaging device 110 may include a single-modality scanner and/or a multi-modality scanner. The single-modality scanner may include, for example, an ultrasound scanner, an X-ray scanner, a computed tomography (CT) scanner, a magnetic resonance imaging (MRI) scanner, an ultrasonography instrument, a positron emission tomography (PET) scanner, an optical coherence tomography (OCT) scanner, an ultrasound (US) scanner, an intravascular ultrasound (IVUS) scanner, a near-infrared spectroscopy (NIRS) scanner, a far-infrared (FIR) scanner, or the like, or any combination thereof. The multi-modality scanner may include, for example, an X-ray imaging-magnetic resonance imaging (X-ray-MRI) scanner, a positron emission tomography-X-ray imaging (PET-X-ray) scanner, a single-photon emission computed tomography-magnetic resonance imaging (SPECT-MRI) scanner, a positron emission tomography-computed tomography (PET-CT) scanner, a digital subtraction angiography-magnetic resonance imaging (DSA-MRI) scanner, or the like, or any combination thereof. The scanners described above are provided for illustration purposes only and are not intended to limit the scope of the present specification. As an example, the imaging device 110 may include an MRI scanner 111 and a CT scanner 112, wherein the MRI scanner 111 may be configured to acquire MRI data of the object to generate an MR image, and the CT scanner 112 may be configured to acquire CT data of the object to generate a CT image.
The network 120 may include any suitable network capable of facilitating the exchange of information and/or data for the image processing system 100. In some embodiments, at least one component of the image processing system 100 (e.g., the imaging device 110, the terminal device 130, the processing device 140, the storage device 150) may exchange information and/or data with at least one other component of the image processing system 100 through the network 120. For example, the processing device 140 may acquire the imaging data of the object from the imaging device 110 through the network 120. The network 120 may include a public network (e.g., the Internet), a private network (e.g., a local area network (LAN)), a wired network, a wireless network (e.g., an 802.11 network, a Wi-Fi network), a frame relay network, a virtual private network (VPN), a satellite network, a telephone network, a router, a hub, a switch, a fiber-optic network, a telecommunications network, an intranet, a wireless local area network (WLAN), a metropolitan area network (MAN), a public switched telephone network (PSTN), a Bluetooth™ network, a ZigBee™ network, a near-field communication (NFC) network, or the like, or any combination thereof. In some embodiments, the network 120 may include at least one network access point. For example, the network 120 may include wired and/or wireless network access points, such as base stations and/or Internet exchange points, through which at least one component of the image processing system 100 may connect to the network 120 to exchange data and/or information.
The terminal device 130 may communicate with and/or be connected to the imaging device 110, the processing device 140, and/or the storage device 150. For example, a user may interact with the imaging device 110 through the terminal device 130 to control one or more components of the imaging device 110. In some embodiments, the terminal device 130 may include a mobile device 131, a tablet computer 132, a laptop computer 133, or the like, or any combination thereof. For example, the mobile device 131 may include a mobile control handle, a personal digital assistant (PDA), a smartphone, or the like, or any combination thereof.
The processing device 140 may process data and/or information acquired from the imaging device 110, the terminal device 130, and/or the storage device 150. For example, the processing device 140 may acquire imaging data of an object from the imaging device 110, and determine a target image and at least one reference image of the object, wherein the target image and the reference image correspond to different imaging devices (e.g., corresponding to an MRI scanner and a CT scanner, respectively). As another example, the processing device 140 may correct the target image based on the at least one reference image.
In some embodiments, the processing device 140 may be a single server or a server group. The server group may be centralized or distributed. In some embodiments, the processing device 140 may be local or remote. For example, the processing device 140 may access information and/or data from the imaging device 110, the terminal device 130, and/or the storage device 150 through the network 120. As another example, the processing device 140 may be directly connected to the imaging device 110, the terminal device 130, and/or the storage device 150 to access information and/or data. In some embodiments, the processing device 140 may be implemented on a cloud platform. For example, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an inter-cloud, a multi-cloud, or the like, or any combination thereof.
In some embodiments, the processing device 140 may include one or more processors (e.g., single-chip processors or multi-chip processors). Merely by way of example, the processing device 140 may include a central processing unit (CPU), an application-specific integrated circuit (ASIC), an application-specific instruction-set processor (ASIP), a graphics processing unit (GPU), a physics processing unit (PPU), a digital signal processor (DSP), a field-programmable gate array (FPGA), a programmable logic device (PLD), a controller, a microcontroller unit, a reduced instruction set computer (RISC), a microprocessor, or the like, or any combination thereof. In some embodiments, the processing device 140 (or all or part of its functions) may be a part of the imaging device 110 or the terminal device 130.
The storage device 150 may store data, instructions, and/or any other information. In some embodiments, the storage device 150 may store data acquired from the imaging device 110, the terminal device 130, and/or the processing device 140. For example, the storage device 150 may store imaging data acquired from the imaging device 110 and related information thereof. As another example, the storage device 150 may store images generated based on the imaging data (e.g., the target image, the reference image). In some embodiments, the storage device 150 may store data and/or instructions that the processing device 140 executes or uses to perform the exemplary methods described in the present specification. In some embodiments, the storage device 150 may include a mass storage device, a removable storage device, a volatile read-write memory, a read-only memory (ROM), or the like, or any combination thereof. In some embodiments, the storage device 150 may be implemented on a cloud platform.
In some embodiments, the storage device 150 may be connected to the network 120 to communicate with at least one other component of the image processing system 100 (e.g., the imaging device 110, the terminal device 130, the processing device 140). At least one component of the image processing system 100 may access the data stored in the storage device 150 (e.g., the target image of the object, the reference image) through the network 120. In some embodiments, the storage device 150 may be a part of the processing device 140.
It should be noted that the above description is provided for illustrative purposes only and is not intended to limit the scope of the present specification. For those of ordinary skill in the art, various changes and modifications may be made under the guidance of the content of the present specification. The features, structures, methods, and other characteristics of the exemplary embodiments described in the present specification may be combined in various ways to obtain additional and/or alternative exemplary embodiments.
FIG. 2 is a block diagram of an exemplary image processing system according to some embodiments of the present specification. As shown in FIG. 2, in some embodiments, an image processing system 200 may include a target image acquisition module 210, a reference image acquisition module 220, and a correction module 230. In some embodiments, the image processing system 200 may be implemented by the processing device 140.
The target image acquisition module 210 may be configured to acquire a target image of an object. A target image may refer to an image that has a target feature or satisfies a target requirement. More descriptions regarding the acquisition of the target image may be found in operation 310 of FIG. 3 and the relevant descriptions thereof.
The reference image acquisition module 220 may be configured to acquire at least one reference image of the object. The reference image may be used to correct the target image. In some embodiments, the imaging device corresponding to the reference image may be different from the imaging device corresponding to the target image. In some embodiments, the reference image and the target image may respectively correspond to different modalities of an imaging device. More descriptions regarding the acquisition of the reference image may be found in operation 320 of FIG. 3 and the relevant descriptions thereof.
The correction module 230 may be configured to correct the target image based on the at least one reference image. In some embodiments, the correction module 230 may first preprocess the target image, and then correct the preprocessed target image based on the at least one reference image. In some embodiments, the correction module 230 may determine a difference between the target image and the at least one reference image, and correct the target image based on the difference. More descriptions regarding the correction of the target image may be found in operation 330 of FIG. 3 and the relevant descriptions thereof.
It should be understood that the image processing system 200 and its modules shown in FIG. 2 may be implemented in various ways, for example, by hardware, software, or a combination of software and hardware. The system and its modules in the present specification may be implemented not only by hardware circuits such as very-large-scale integrated circuits or gate arrays, semiconductors such as logic chips and transistors, or programmable hardware devices such as field-programmable gate arrays and programmable logic devices, but also by software executed by various types of processors, or by a combination of the above hardware circuits and software (e.g., firmware).
It should be noted that the above description of the image processing system 200 and its modules is provided merely for convenience of description, and does not limit the present specification to the scope of the illustrated embodiments. It can be understood that, after understanding the principle of the system, those skilled in the art may arbitrarily combine the modules, or form a subsystem connected to other modules, without departing from this principle.
FIG. 3 is a flowchart of an exemplary image processing method according to some embodiments of the present specification. In some embodiments, a process 300 may be performed by the processing device 140 or the image processing system 200. For example, the process 300 may be stored in a storage device (e.g., the storage device 150, a storage unit of the processing device 140) in the form of a program or instructions; when a processor or the modules shown in FIG. 2 execute the program or instructions, the process 300 may be implemented. In some embodiments, the process 300 may be accomplished with one or more additional operations not described below, and/or without one or more of the operations discussed below. In addition, the order of the operations shown in FIG. 3 is not limiting.
Operation 310: acquiring a target image of an object. In some embodiments, operation 310 may be performed by the processing device 140 or the image processing system 200 (e.g., the target image acquisition module 210).
In some embodiments, the object may include a biological object and/or a non-biological object. For example, the object may be animate or inanimate organic and/or inorganic matter. As another example, the object may include a specific part, organ, and/or tissue of a patient. Merely by way of example, the object may include the patient's brain, neck, heart, lungs, or the like, or any combination thereof.
A target image may be an image whose target feature satisfies a target requirement. For example, the target image may include an MR image with relatively high soft-tissue resolution accuracy. As another example, the target image may include a CT image with relatively high spatial position accuracy. The target feature or the target requirement relates to the actual needs of the imaging or of the diagnosis and treatment, and is not limited in the present specification.
In some embodiments, the target image may include a two-dimensional (2D) image, a three-dimensional (3D) image, a four-dimensional (4D) image (e.g., a time series of 3D images), or the like, or any combination thereof. In some embodiments, the target image may include a CT image, an MR image, a PET image, a SPECT image, an ultrasound image, an X-ray image, or the like, or any combination thereof.
In some embodiments, the target image acquisition module 210 may acquire imaging data of the object from an imaging device (e.g., the imaging device 110), and determine the target image based on the imaging data. In some embodiments, the target image acquisition module 210 may also perform correction operations (e.g., randoms correction, detector normalization, scatter correction, attenuation correction) or preprocessing operations (e.g., resizing, image resampling, image normalization) on an original image determined based on the original imaging data, so as to determine the target image. In some embodiments, the target image acquisition module 210 may determine the target image based on the imaging data through an image reconstruction algorithm. Taking MR images as an example, exemplary MR image reconstruction algorithms may include a Fourier transform algorithm, a back-projection algorithm (e.g., a convolution back-projection algorithm or a filtered back-projection algorithm), an iterative reconstruction algorithm, and the like. In some embodiments, the target image acquisition module 210 may acquire a pre-stored target image from a storage device (e.g., the storage device 150) or an external storage device (e.g., a medical image database).
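To make the Fourier transform reconstruction concrete, the following is a minimal sketch on synthetic data. The square phantom, array size, and fully sampled Cartesian k-space are illustrative assumptions, not the specification's implementation:

```python
import numpy as np

# Hypothetical 2D phantom standing in for the scanned object.
image_true = np.zeros((64, 64))
image_true[24:40, 24:40] = 1.0

# What an MR scanner records is (to first approximation) the spatial
# frequency content of the object, i.e., its k-space.
k_space = np.fft.fftshift(np.fft.fft2(image_true))

# Fourier transform reconstruction: inverse FFT of the acquired k-space.
reconstructed = np.abs(np.fft.ifft2(np.fft.ifftshift(k_space)))
```

With fully sampled k-space, the inverse transform recovers the phantom up to numerical precision; iterative reconstruction becomes relevant when k-space is undersampled or noisy.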
Operation 320: acquiring at least one reference image of the object. In some embodiments, operation 320 may be performed by the processing device 140 or the image processing system 200 (e.g., the reference image acquisition module 220).
The reference image may be used to correct the target image. In some embodiments, the features or characteristics of the reference image are at least partially different from the corresponding features or characteristics of the target image. In some embodiments, the features or characteristics of the reference image are at least partially superior to the corresponding features or characteristics of the target image. In some embodiments, the features or characteristics of the reference image in certain dimensions or aspects are complementary to those of the target image. For example, the target image has a feature A and a feature B, wherein the accuracy of the feature A (i.e., the target feature) is relatively high, and the accuracy of the feature B is relatively low. Correspondingly, the reference image may be an image in which the accuracy of the feature B is relatively high. As another example, the target image has a feature A, a feature B, a feature C, and a feature D, wherein the accuracy of the feature A (i.e., the target feature) is relatively high, and the accuracies of the feature B, the feature C, and the feature D are relatively low. Correspondingly, the reference image may be an image in which the accuracies of the feature B, the feature C, and the feature D are all relatively high; or there may be multiple reference images, each of which has a relatively high accuracy in at least one of the feature B, the feature C, and the feature D. As a specific example, the target image may include a magnetic resonance imaging (MRI) image with relatively high soft-tissue resolution accuracy but relatively low spatial position accuracy, and the reference image may include a computed tomography (CT) image with relatively high spatial position accuracy, a scanned image of a radar sensor, an X-ray image, or the like, or any combination thereof.
In some embodiments, the reference image is acquired in a manner similar to that of the target image, which is not repeated here.
In some embodiments, the imaging device corresponding to the reference image may be different from the imaging device corresponding to the target image. For example, the imaging device corresponding to the target image may be an MRI device, and the imaging device corresponding to the reference image may be an ultrasound device, an X-ray device, a CT device, a PET device, an OCT device, an IVUS device, an NIRS device, an FIR device, or the like, or any combination thereof. In some embodiments, the reference image and the target image may respectively correspond to different modalities of an imaging device. For example, the imaging device may be an MRI-CT device, the target image may correspond to the MRI modality of the imaging device, and the reference image may correspond to the CT modality of the imaging device.
In some embodiments, the at least one reference image may include multiple reference images. In some embodiments, the multiple reference images may be acquired by different imaging devices. Correspondingly, the multiple reference images may have features of different dimensions or different aspects. For example, the multiple reference images may include a first reference image and a second reference image, wherein the first reference image may be acquired by a CT device, and the second reference image may be acquired by a PET device. In some embodiments, the multiple reference images may be acquired by the same imaging device. In some embodiments, the multiple reference images may be acquired by the same imaging device under different imaging conditions (e.g., imaging angle, sampling frequency, ray intensity, magnetic field intensity). For example, the multiple reference images may all be acquired by a CT device but correspond to different imaging angles.
With different imaging devices or different imaging conditions, reference images corresponding to different dimensions or different aspects can be acquired; correspondingly, the target image can be corrected from different dimensions or different aspects, improving the multi-dimensional correction effect.
Operation 330: correcting the target image based on the at least one reference image. In some embodiments, operation 330 may be performed by the processing device 140 or the image processing system 200 (e.g., the correction module 230).
In some embodiments, the processing device 140 may first preprocess the target image, and then correct the preprocessed target image based on the at least one reference image.
By preprocessing the target image, some of the factors that may affect image quality can be reduced or eliminated in advance, thereby improving the subsequent correction effect. In some embodiments, taking an MRI image as an exemplary target image, the influence of the inhomogeneity of the main magnetic field (the B0 field) may be reduced or eliminated through preprocessing. For example, the processing device 140 may preprocess the target image through a compensation algorithm. Exemplary compensation algorithms may include a 3D phase unwrapping method (e.g., a path-following method, a minimum p-norm method), a 2D phase unwrapping method, or the like. As another example, the processing device 140 may obtain a B0 field correction matrix in advance, and preprocess the target image using the B0 field correction matrix.
Generally, the influence of B0 field inhomogeneity differs for different objects or under different imaging parameters or sequence parameters. Therefore, reducing or eliminating the influence of the B0 field in advance through preprocessing can improve the correction effect of the subsequent images.
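As an illustrative sketch of applying a precomputed B0 field correction matrix, the following treats B0 inhomogeneity as a smooth multiplicative bias on the magnitude image and the correction matrix as its elementwise reciprocal. Both the bias model and the way the matrix is obtained are assumptions for illustration; in practice the matrix would be derived, e.g., from a measured field map:

```python
import numpy as np

# Hypothetical ideal MR magnitude image.
ideal = np.ones((32, 32))

# Smooth multiplicative bias standing in for B0-related intensity variation.
y, x = np.mgrid[0:32, 0:32]
bias = 1.0 + 0.2 * np.sin(x / 10.0)
observed = ideal * bias                    # what the scanner would produce

# Precomputed B0 field correction matrix: reciprocal of the estimated bias.
correction_matrix = 1.0 / bias

# Preprocessing step: apply the correction matrix to the target image.
corrected = observed * correction_matrix
```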
In some embodiments, the preprocessing may also include adjusting the resolution, adjusting the contrast, adjusting the signal-to-noise ratio, spatial-domain processing (e.g., smoothing, edge detection, sharpening), or the like, or any combination thereof.
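Two of the listed preprocessing operations, spatial-domain smoothing and intensity normalization, can be sketched as follows (the 3x3 mean filter and min-max normalization are illustrative choices, not prescribed by the specification):

```python
import numpy as np

def normalize(img):
    """Min-max normalize an image to the range [0, 1]."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo)

def smooth3(img):
    """3x3 mean filter (simple spatial-domain smoothing), edge-padded."""
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return out / 9.0

img = np.arange(16, dtype=float).reshape(4, 4)  # toy intensity ramp
pre = normalize(smooth3(img))                   # smoothed, then normalized
```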
In some embodiments, the processing device 140 may determine a difference between the target image and the at least one reference image, and correct the target image based on the difference.
In some embodiments, the difference may reflect the difference between the target image and the at least one reference image in various dimensions or aspects (e.g., spatial position, gray value, gradient value, resolution, brightness). In some embodiments, the difference may reflect the difference between the target image and the at least one reference image with respect to features other than the target feature. For example, the target image has a feature A and a feature B, wherein the accuracy of the feature A is relatively high and the accuracy of the feature B is relatively low, and the reference image is an image in which the accuracy of the feature B is relatively high. Correspondingly, the difference may be the difference between the feature B of the target image and the feature B of the reference image. In some embodiments, the difference may be represented by a numerical value, a vector, a matrix, a model, an image, or the like.
在一些实施例中,差异可以包括对象的同一位置点在目标图像和至少一幅参考图像中的空间位置差异。例如,处理设备140可以将目标图像和至少一幅参考图像映射至相同的空间坐标系,并确定对象的同一位置点在目标图像和至少一幅参考图像中的空间坐标差异(例如,空间坐标间的差值)。In some embodiments, the difference may include a spatial position difference of the same location point of the object in the target image and at least one reference image. For example, the processing device 140 may map the target image and the at least one reference image to the same spatial coordinate system, and determine the difference in the spatial coordinates of the same position point of the object in the target image and the at least one reference image (for example, between the spatial coordinates difference).
仅作为示例,图4A是根据本说明书一些实施例所示的示例性参考图像的示意图,图4B是根据本说明书一些实施例所示的示例性目标图像的示意图,图4A中的圆点与图4B中的圆点一一对应,彼此对应的圆点对应对象的同一位置点,彼此对应的圆点间的坐标的差值即为目标图像和参考图像间的差异。Merely by way of example, FIG. 4A is a schematic diagram of an exemplary reference image according to some embodiments of this specification, and FIG. 4B is a schematic diagram of an exemplary target image according to some embodiments of this specification. The dots in FIG. 4A correspond one-to-one to the dots in FIG. 4B; dots that correspond to each other correspond to the same position point of the object, and the difference in coordinates between corresponding dots is the difference between the target image and the reference image.
在一些实施例中,处理设备140可以将至少一幅参考图像和目标图像进行配准,并基于至少一幅参考图像和目标图像的第一配准结果,确定至少一幅参考图像和目标图像的差异。例如,将至少一幅参考图像和目标图像进行配准后,处理设备140可以直接对配准后的至少一幅参考图像和目标图像进行相减处理,以确定二者间的差异。又例如,将至少一幅参考图像和目标图像进行配准后,处理设备140可以直接对配准后的至少一幅参考图像和目标图像进行相除处理,以确定二者间的差异。在一些实施例中,示例性的图像配准算法可以包括平均绝对差算法(MAD)、绝对误差和算法(SAD)、误差平方和算法(SSD)、平均误差平方和算法(MSD)、归一化积相关算法(NCC)、序贯相似性检测算法(SSDA)、局部灰度值编码算法等或其任意组合。In some embodiments, the processing device 140 may register the at least one reference image and the target image, and determine the difference between the at least one reference image and the target image based on a first registration result of the at least one reference image and the target image. For example, after registering the at least one reference image and the target image, the processing device 140 may directly perform subtraction processing on the registered at least one reference image and the target image to determine the difference between the two. As another example, after registering the at least one reference image and the target image, the processing device 140 may directly perform division processing on the registered at least one reference image and the target image to determine the difference between the two. In some embodiments, exemplary image registration algorithms may include the mean absolute differences (MAD) algorithm, the sum of absolute differences (SAD) algorithm, the sum of squared differences (SSD) algorithm, the mean squared differences (MSD) algorithm, the normalized cross-correlation (NCC) algorithm, the sequential similarity detection algorithm (SSDA), a local gray value coding algorithm, etc., or any combination thereof.
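A minimal sketch of the subtraction/division step described above, assuming registration has already been performed (the registration algorithm itself is not shown) and that both images are equally sized grids of gray values; all values are illustrative:

```python
def subtract_images(target, reference):
    """Pixel-wise subtraction of two already-registered images."""
    return [[t - r for t, r in zip(t_row, r_row)]
            for t_row, r_row in zip(target, reference)]

def divide_images(target, reference, eps=1e-9):
    """Pixel-wise division of two already-registered images; eps guards
    against division by zero in background regions."""
    return [[t / (r + eps) for t, r in zip(t_row, r_row)]
            for t_row, r_row in zip(target, reference)]

target = [[10.0, 20.0], [30.0, 40.0]]
reference = [[8.0, 20.0], [25.0, 40.0]]
diff = subtract_images(target, reference)   # [[2.0, 0.0], [5.0, 0.0]]
ratio = divide_images(target, reference)
```

Non-zero entries in `diff` (or entries of `ratio` away from 1) mark where the registered images disagree.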
在一些实施例中,处理设备140可以将至少一幅参考图像和目标图像输入第一模型,并基于第一模型的输出,确定至少一幅参考图像和目标图像的差异。关于第一模型的更多内容参见图5A及其相关描述。In some embodiments, the processing device 140 may input the at least one reference image and the target image into the first model, and based on the output of the first model, determine the difference of the at least one reference image and the target image. For more information about the first model, please refer to FIG. 5A and its related description.
在一些实施例中,对于对象的多个位置点中的每一个,处理设备140可以确定该位置点在目标图像和至少一幅参考图像中的点差异(例如,空间位置差异、灰度值差异、梯度值差异、分辨率差异、亮度差异等),并基于多个位置点分别对应的多个点差异,构建差异模型,该差异模型可以体现至少一幅参考图像和目标图像的差异。关于差异模型的更多内容参见图5B及其相关描述。In some embodiments, for each of a plurality of position points of the object, the processing device 140 may determine a point difference (e.g., a spatial position difference, a gray value difference, a gradient value difference, a resolution difference, a brightness difference, etc.) of the position point between the target image and the at least one reference image, and construct a difference model based on the plurality of point differences corresponding to the plurality of position points; the difference model may reflect the difference between the at least one reference image and the target image. See FIG. 5B and its related description for more on the difference model.
在一些实施例中,处理设备140可以基于差异调整目标图像的对应特征,以校正目标图像。例如,结合上文,目标图像具有特征A和特征B,其中,特征A的精度较高,而特征B的精度较低。参考图像可以是特征B的精度较高的图像。差异可以是目标图像的特征B与参考图像的特征B之间的差异。相应地,处理设备140可以基于差异调整目标图像的特征B,使其接近参考图像的特征B。仅作为示例,图4C是根据本说明书一些实施例所示的示例性校正过程的示意图,图4C中的箭头表示对象的多个位置点在至少一幅参考图像和目标图像之间的坐标差异,相应地,处理设备140可以基于该坐标差异,调整目标图像中各个点的坐标值,从而提升其空间位置精度。In some embodiments, the processing device 140 may adjust corresponding features of the target image based on the difference to correct the target image. For example, in combination with the above, the target image has feature A and feature B, where feature A has higher accuracy and feature B has lower accuracy. The reference image may be an image in which feature B has higher accuracy. The difference may be the difference between feature B of the target image and feature B of the reference image. Accordingly, the processing device 140 may adjust feature B of the target image based on the difference so that it approaches feature B of the reference image. Merely by way of example, FIG. 4C is a schematic diagram of an exemplary correction process according to some embodiments of this specification. The arrows in FIG. 4C represent the coordinate differences of multiple position points of the object between the at least one reference image and the target image. Accordingly, the processing device 140 may adjust the coordinate values of each point in the target image based on the coordinate differences, thereby improving its spatial position accuracy.
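The coordinate adjustment of FIG. 4C can be sketched as moving each target-image point against its measured difference (a toy example with hypothetical values; in practice the correction would be applied densely over the whole image, e.g. as a warp):

```python
def correct_point(target_coord, coord_difference, step=1.0):
    """Shift a target-image coordinate toward its reference-image position.
    coord_difference is (target - reference), so step=1.0 moves the point
    exactly onto the reference position; step < 1.0 moves it part of the way."""
    return tuple(t - step * d for t, d in zip(target_coord, coord_difference))

# Hypothetical point and its previously computed difference vector:
corrected = correct_point((10.0, 20.0, 5.0), (-0.5, 1.0, 0.0))
# corrected == (10.5, 19.0, 5.0), i.e. the reference position
```

A `step` smaller than 1.0 would only partially pull the target image toward the reference, which is one way to avoid over-correcting when the difference estimate is noisy.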
在一些实施例中,处理设备140可以将差异和目标图像输入机器学习模型,并基于机器学习模型的输出,确定校正后的目标图像。In some embodiments, the processing device 140 may input the difference and the target image into the machine learning model, and based on the output of the machine learning model, determine the corrected target image.
在一些实施例中,至少一幅参考图像可以包括多幅参考图像。处理设备140可以确定目标图像与多幅参考图像中的每一幅之间的差异。处理设备140可以对多幅参考图像分别对应的差异进行综合处理。例如,处理设备140可以对多幅参考图像分别对应的差异进行加权处理。进一步地,处理设备140可以基于综合处理结果,校正目标图像。关于综合处理多幅参考图像分别对应的差异的更多内容参见图6A、图6B及其相关描述。In some embodiments, at least one reference image may include multiple reference images. Processing device 140 may determine differences between the target image and each of the plurality of reference images. The processing device 140 may perform comprehensive processing on the differences respectively corresponding to the multiple reference images. For example, the processing device 140 may perform weighting processing on differences respectively corresponding to multiple reference images. Further, the processing device 140 may correct the target image based on the comprehensive processing result. For more information about comprehensively processing the differences corresponding to multiple reference images, please refer to FIG. 6A, FIG. 6B and their related descriptions.
在一些实施例中,处理设备140可以确定目标图像与多幅参考图像中的每一幅之间的差异。处理设备140可以基于差异,校正目标图像以确定中间校正图像。进一步地,处理设备140可以基于多幅中间校正图像,确定校正后的目标图像。关于确定中间校正图像的更多内容参见图7A、图7B及其相关描述。In some embodiments, processing device 140 may determine differences between the target image and each of the plurality of reference images. The processing device 140 may correct the target image to determine an intermediate corrected image based on the difference. Further, the processing device 140 may determine the corrected target image based on the multiple intermediate corrected images. For more information on determining the intermediate corrected image, refer to FIG. 7A, FIG. 7B and their related descriptions.
在一些实施例中,处理设备140可以将至少一幅参考图像和目标图像进行配准,并基于至少一幅参考图像和目标图像的第二配准结果,确定校正后的目标图像。例如,将至少一幅参考图像和目标图像进行配准后,处理设备140可以直接对配准后的至少一幅参考图像和目标图像进行相减处理,以确定校正后的目标图像。又例如,将至少一幅参考图像和目标图像进行配准后,处理设备140可以直接对配准后的至少一幅参考图像和目标图像进行相除处理,以确定校正后的目标图像。In some embodiments, the processing device 140 can register the at least one reference image and the target image, and determine the corrected target image based on the second registration result of the at least one reference image and the target image. For example, after registering the at least one reference image and the target image, the processing device 140 may directly perform subtraction processing on the registered at least one reference image and the target image, so as to determine the corrected target image. For another example, after registering the at least one reference image and the target image, the processing device 140 may directly perform division processing on the registered at least one reference image and the target image, so as to determine the corrected target image.
在一些实施例中,处理设备140可以将至少一幅参考图像和目标图像输入第二模型。并基于第二模型的输出,确定校正后的目标图像。关于第二模型的更多内容参见图8A、图8B及其相关描述。In some embodiments, the processing device 140 may input at least one reference image and the target image into the second model. And based on the output of the second model, the corrected target image is determined. For more information about the second model, refer to FIG. 8A, FIG. 8B and their related descriptions.
根据本说明书实施例,可以基于对象的至少一幅参考图像和目标图像间的差异,校正目标图像,其中,参考图像的特征或特点至少部分优于目标图像的对应特征或特点。相应地,可以在保留目标图像的目标特征的同时,调整目标图像的其他特征接近参考图像的相应特征,从而丰富目标图像的特征,提高目标图像的图像质量,进而提供综合的图像信息,提升后续可能的诊疗效果。According to an embodiment of the present specification, the target image may be corrected based on a difference between at least one reference image of the object and the target image, wherein features or characteristics of the reference image are at least partially better than corresponding features or characteristics of the target image. Correspondingly, while retaining the target features of the target image, other features of the target image can be adjusted to be close to the corresponding features of the reference image, thereby enriching the features of the target image, improving the image quality of the target image, and providing comprehensive image information to improve subsequent possible therapeutic effects.
应当注意的是,上述有关流程300的描述仅仅是为了示例和说明,而不限定本说明书的适用范围。对于本领域技术人员来说,在本说明书的指导下可以对流程300进行各种修正和改变。然而,这些修正和改变仍在本说明书的范围之内。It should be noted that the above description about the process 300 is only for illustration and description, and does not limit the scope of application of this description. For those skilled in the art, various modifications and changes can be made to the process 300 under the guidance of this description. However, such modifications and changes are still within the scope of this specification.
图5A是根据本说明书一些实施例所示的确定目标图像与参考图像间差异的示例性方法的示意图。Fig. 5A is a schematic diagram of an exemplary method for determining the difference between a target image and a reference image according to some embodiments of the present specification.
如图5A所示,在一些实施例中,可以将目标图像510和至少一幅参考图像515输入第一模型520,并基于第一模型520的输出,确定目标图像和至少一幅参考图像的差异530。As shown in FIG. 5A, in some embodiments, a target image 510 and at least one reference image 515 may be input into a first model 520, and based on the output of the first model 520, the difference 530 between the target image and the at least one reference image may be determined.
在一些实施例中,第一模型520可以是卷积神经网络模型(Convolutional Neural Network,CNN)、深度神经网络模型(Deep Neural Network,DNN)、循环神经网络模型(Recurrent Neural Network,RNN)、图神经网络模型(Graph Neural Network,GNN)、生成对抗网络模型(Generative Adversarial Network,GAN)等或其任意组合。In some embodiments, the first model 520 may be a convolutional neural network model (Convolutional Neural Network, CNN), a deep neural network model (Deep Neural Network, DNN), a recurrent neural network model (Recurrent Neural Network, RNN), a graph Neural network model (Graph Neural Network, GNN), Generative Adversarial Network model (Generative Adversarial Network, GAN), etc. or any combination thereof.
在一些实施例中,第一模型520可以基于多组第一训练样本540训练确定。每组第一训练样本540可以包括样本对象的样本目标图像541、样本对象的至少一幅样本参考图像542和对应的样本差异543,其中,样本目标图像541和至少一幅样本参考图像542为训练数据,对应的样本差异543为标签(label)。In some embodiments, the first model 520 may be determined based on multiple sets of first training samples 540 through training. Each set of first training samples 540 may include a sample target image 541 of the sample object, at least one sample reference image 542 of the sample object, and a corresponding sample difference 543, wherein the sample target image 541 and at least one sample reference image 542 are training data, and the corresponding sample difference 543 is a label.
在一些实施例中,样本目标图像541和至少一幅样本参考图像542对应不同的成像设备。在一些实施例中,至少一幅样本参考图像542对应不同的成像设备。样本目标图像541和样本参考图像542的相互关系与目标图像与参考图像的相互关系类似,更具体的描述可见图3。In some embodiments, the sample target image 541 and at least one sample reference image 542 correspond to different imaging devices. In some embodiments, at least one sample reference image 542 corresponds to a different imaging device. The relationship between the sample target image 541 and the sample reference image 542 is similar to the relationship between the target image and the reference image, and a more specific description can be found in FIG. 3.
在一些实施例中,样本目标图像541和至少一幅样本参考图像542间的样本差异543可以由用户手动标记,也可以由图像处理系统100自动标记。In some embodiments, the sample difference 543 between the sample target image 541 and at least one sample reference image 542 can be marked manually by a user, or automatically marked by the image processing system 100 .
在一些实施例中,处理设备140(或其他处理设备)将样本目标图像541和至少一幅样本参考图像542作为输入,将对应的样本差异543作为监督,对第一模型520进行训练,通过机器学习算法(例如,随机梯度下降法)更新第一模型520的参数,以最小化第一损失函数,直到模型训练完成;或迭代训练次数达到一定次数后则停止训练。In some embodiments, the processing device 140 (or another processing device) takes the sample target image 541 and the at least one sample reference image 542 as input and the corresponding sample difference 543 as supervision to train the first model 520, updating the parameters of the first model 520 through a machine learning algorithm (e.g., stochastic gradient descent) to minimize the first loss function until model training is complete, or stopping training after the number of training iterations reaches a certain count.
在一些实施例中,第一损失函数可以是感知损失函数。在一些实施例中,第一损失函数还可以是其他损失函数,例如,平方损失函数、逻辑回归损失函数等。In some embodiments, the first loss function may be a perceptual loss function. In some embodiments, the first loss function may also be other loss functions, for example, a square loss function, a logistic regression loss function, and the like.
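The training procedure above can be illustrated with a deliberately tiny stand-in: a one-parameter "model" that predicts a difference value from a (target, reference) pixel pair, updated by stochastic gradient descent under a squared loss. This is only a sketch of the loop; an actual first model would be a neural network (e.g. a CNN or GAN) trained on whole images, and all sample values here are hypothetical.

```python
def predict(a, target_pixel, reference_pixel):
    # Toy stand-in for the first model: a single learnable parameter `a`
    return a * (target_pixel - reference_pixel)

def train(samples, lr=0.05, epochs=200):
    a = 0.0  # initial model parameter
    for _ in range(epochs):
        for target_pixel, reference_pixel, label_diff in samples:
            pred = predict(a, target_pixel, reference_pixel)
            # gradient of the squared loss (pred - label_diff)**2 w.r.t. `a`
            grad = 2.0 * (pred - label_diff) * (target_pixel - reference_pixel)
            a -= lr * grad  # stochastic gradient descent update
    return a

# Each training sample: (sample target value, sample reference value, labeled difference)
samples = [(10.0, 8.0, 2.0), (5.0, 1.0, 4.0), (7.0, 7.0, 0.0)]
a = train(samples)  # converges toward a == 1.0 for these labels
```

The labeled differences play the role of the sample difference 543: the loop drives the model's predictions toward the supervision signal, exactly as the embodiment describes, just at pixel rather than image scale.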
图5B是根据本说明书一些实施例所示的确定目标图像与参考图像间差异的示例性方法的示意图。Fig. 5B is a schematic diagram of an exemplary method for determining the difference between a target image and a reference image according to some embodiments of the present specification.
如图5B所示,在一些实施例中,对于对象的多个位置点中的每一个,处理设备140可以确定该位置点在目标图像560和至少一幅参考图像565中的点差异570,并基于多个位置点分别对应的多个点差异,通过构建差异模型580以确定目标图像560和至少一幅参考图像565的差异590。As shown in FIG. 5B, in some embodiments, for each of a plurality of position points of the object, the processing device 140 may determine the point difference 570 of the position point in the target image 560 and the at least one reference image 565, and, based on the plurality of point differences corresponding to the plurality of position points, construct a difference model 580 to determine the difference 590 between the target image 560 and the at least one reference image 565.
在一些实施例中,点差异可以包括空间位置差异、灰度值差异、梯度值差异、分辨率差异、亮度差异等或其任意组合。例如,对于空间位置差异,处理设备140可以将目标图像和至少一幅参考图像映射至相同的空间坐标系,并确定对象的同一位置点在目标图像和至少一幅参考图像中的空间坐标差异(例如,空间坐标间的差值)。具体例如,以对象的某个特定位置点为例,该位置点在目标图像560中的坐标为(x_a, y_a, z_a),在至少一幅参考图像565中的坐标为(x_b, y_b, z_b),那么该位置点在目标图像560和至少一幅参考图像565中的空间位置差异d可以基于下述公式(1)或公式(2)确定:In some embodiments, the point difference may include a spatial position difference, a gray value difference, a gradient value difference, a resolution difference, a brightness difference, etc., or any combination thereof. For example, for the spatial position difference, the processing device 140 may map the target image and the at least one reference image to the same spatial coordinate system, and determine the spatial coordinate difference (e.g., the difference between spatial coordinates) of the same position point of the object in the target image and the at least one reference image. As a specific example, for a particular position point of the object whose coordinates in the target image 560 are (x_a, y_a, z_a) and whose coordinates in the at least one reference image 565 are (x_b, y_b, z_b), the spatial position difference d of the position point between the target image 560 and the at least one reference image 565 may be determined based on the following formula (1) or formula (2):
d = ((x_a - x_b), (y_a - y_b), (z_a - z_b));     (1)
d = ((x_b - x_a), (y_b - y_a), (z_b - z_a))      (2)
在一些实施例中,处理设备140可以基于多个位置点分别对应的多个点差异构建差异模型580,从而确定至少一幅参考图像和目标图像的差异590。在一些实施例中,处理设备140可以通过数据建模方法构建差异模型580。在一些实施例中,处理设备140可以通过三维建模方法构建差异模型580。在一些实施例中,处理设备140可以通过主成分分析、模拟仿真、回归分析、聚类分析等方法构建差异模型580。在一些实施例中,差异590可以通过数值、向量、矩阵、模型、图像等方式体现。In some embodiments, the processing device 140 may construct the difference model 580 based on the plurality of point differences corresponding to the plurality of position points, so as to determine the difference 590 between the at least one reference image and the target image. In some embodiments, the processing device 140 may construct the difference model 580 through a data modeling method. In some embodiments, the processing device 140 may construct the difference model 580 through a three-dimensional modeling method. In some embodiments, the processing device 140 may construct the difference model 580 through principal component analysis, simulation, regression analysis, cluster analysis, or other methods. In some embodiments, the difference 590 may be represented by a numerical value, a vector, a matrix, a model, an image, or the like.
以空间位置差异为例,在确定多个位置点分别对应的空间位置差异(例如,空间坐标差异)后,处理设备140可以通过插值方式确定相邻位置点间的中间位置点对应的空间位置差异,从而确定大量位置点分别对应的空间位置差异。处理设备140可以基于大量空间位置差异,构建差异模型580(例如,如图4C所示的矢量图),差异模型580可以从整体上体现至少一幅参考图像和目标图像的空间位置差异。Taking the spatial position difference as an example, after determining the spatial position differences (e.g., spatial coordinate differences) corresponding to the multiple position points, the processing device 140 may determine, by interpolation, the spatial position differences corresponding to intermediate points between adjacent position points, so as to obtain the spatial position differences corresponding to a large number of points. The processing device 140 may construct the difference model 580 (e.g., a vector diagram as shown in FIG. 4C) based on the large number of spatial position differences, and the difference model 580 may reflect the spatial position difference between the at least one reference image and the target image as a whole.
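The interpolation step can be sketched in one dimension (a hypothetical simplification of the dense difference model, which in practice would interpolate displacement vectors over a 2-D or 3-D grid):

```python
def interpolate_difference(x, x0, d0, x1, d1):
    """Linearly interpolate the spatial difference at position x between two
    landmark positions x0 and x1 with known difference vectors d0 and d1."""
    w = (x - x0) / (x1 - x0)  # 0 at x0, 1 at x1
    return tuple((1.0 - w) * a + w * b for a, b in zip(d0, d1))

# Difference at the midpoint between two hypothetical landmarks:
d_mid = interpolate_difference(5.0, 0.0, (1.0, 0.0, 0.0), 10.0, (3.0, 2.0, 0.0))
# d_mid == (2.0, 1.0, 0.0)
```

Applying this between every pair of neighbouring landmarks yields the dense field of spatial position differences from which the difference model 580 (the vector diagram of FIG. 4C) is built.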
图6A是根据本说明书一些实施例所示的示例性图像处理方法的流程图。在一些实施例中,流程600可以由处理设备140或图像处理系统200执行。例如,流程600可以以程序或指令的形式存储在存储设备(例如,存储设备150、处理设备140的存储单元)中,当处理器或图2所示的模块执行程序或指令时,可以实现流程600。在一些实施例中,流程600可以利用以下未描述的一个或以上附加操作,和/或不通过以下所讨论的一个或以上操作完成。另外,如图6A所示的操作的顺序并非限制性的。在一些实施例中,图3中的操作330描述的校正目标图像可以根据流程600进行。Fig. 6A is a flowchart of an exemplary image processing method according to some embodiments of the present specification. In some embodiments, the process 600 may be executed by the processing device 140 or the image processing system 200 . For example, the process 600 may be stored in a storage device (for example, the storage device 150, the storage unit of the processing device 140) in the form of a program or an instruction, and when the processor or the module shown in FIG. 2 executes the program or the instruction, the process may be implemented. 600. In some embodiments, process 600 may be accomplished with one or more additional operations not described below, and/or without one or more operations discussed below. In addition, the order of operations shown in FIG. 6A is not limiting. In some embodiments, correcting the target image described in operation 330 in FIG. 3 may be performed according to process 600 .
步骤610,确定目标图像与多幅参考图像中的每一幅之间的差异(如图6B所示的差异660、差异665等)。在一些实施例中,该步骤610可以由处理设备140或图像处理系统200(例如,校正模块230)执行。 Step 610, determine the difference between the target image and each of the plurality of reference images (difference 660, difference 665, etc. as shown in FIG. 6B). In some embodiments, this step 610 may be performed by the processing device 140 or the image processing system 200 (eg, the correction module 230 ).
差异可以体现目标图像与多幅参考图像中的每一幅在各种维度或各种方面的区别(例如,空间位置、灰度值、梯度值、分辨率、亮度等)。在一些实施例中,处理设备140可以通过第一模型或者差异模型确定目标图像与多幅参考图像中的每一幅之间的差异。关于确定差异的更多内容参见图5A、图5B及其相关描述。The difference may reflect the difference between the target image and each of the multiple reference images in various dimensions or aspects (for example, spatial position, gray value, gradient value, resolution, brightness, etc.). In some embodiments, the processing device 140 may determine the difference between the target image and each of the plurality of reference images through the first model or the difference model. For more information on determining the difference, refer to FIG. 5A, FIG. 5B and their related descriptions.
步骤620,对多幅参考图像分别对应的差异进行综合处理。在一些实施例中,该步骤620可以由处理设备140或图像处理系统200(例如,校正模块230)执行。 Step 620, perform comprehensive processing on the differences corresponding to the multiple reference images. In some embodiments, this step 620 may be performed by the processing device 140 or the image processing system 200 (eg, the correction module 230 ).
在一些实施例中,处理设备140可以对多幅参考图像分别对应的差异进行加权处理。例如,处理设备140可以对多幅参考图像分别对应的差异赋予不同的权重,并确定多幅参考图像对应的整体差异。在一些实施例中,每一幅参考图像对应的差异的权重可以由用户确定或由图像处理系统100根据图像处理要求确定。例如,目标图像具有特征A、特征B、特征C及特征D,其中,特征A(即目标特征)的精度较高,而特征B、特征C及特征D的精度较低。参考图像可以包括三幅,分别具有较高精度的特征B、特征C及特征D。相应地,每幅参考图像与目标图像的差异可以包括特征B差异、特征C差异和特征D差异。假设最终的图像处理目标是优先校正特征B、其次特征C、再其次特征D,相应地,三幅参考图像分别对应的差异的权重则依次降低。In some embodiments, the processing device 140 may perform weighting processing on the differences respectively corresponding to the multiple reference images. For example, the processing device 140 may assign different weights to the differences corresponding to the multiple reference images, and determine the overall difference corresponding to the multiple reference images. In some embodiments, the weight of the difference corresponding to each reference image may be determined by the user or by the image processing system 100 according to image processing requirements. For example, the target image has feature A, feature B, feature C, and feature D, where feature A (i.e., the target feature) has higher accuracy, while feature B, feature C, and feature D have lower accuracy. The reference images may include three images that respectively have higher-accuracy feature B, feature C, and feature D. Accordingly, the difference between each reference image and the target image may include a feature B difference, a feature C difference, and a feature D difference. Assuming the final image processing goal is to correct feature B first, then feature C, and then feature D, the weights of the differences corresponding to the three reference images decrease accordingly.
仅作为示例,处理设备140可以基于下述公式(3)确定多幅参考图像对应的整体差异:As an example only, the processing device 140 may determine the overall difference corresponding to multiple reference images based on the following formula (3):
d_t = ∑_{i=1}^{n} r_i·d_i     (3)
其中,d_t表示多幅参考图像对应的整体差异,r_i表示第i幅参考图像对应的差异的权重,d_i表示第i幅参考图像对应的差异,n表示多幅参考图像的总数量,n为正整数。Where d_t denotes the overall difference corresponding to the multiple reference images, r_i denotes the weight of the difference corresponding to the i-th reference image, d_i denotes the difference corresponding to the i-th reference image, and n denotes the total number of the multiple reference images, n being a positive integer.
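Formula (3) amounts to a weighted element-wise sum of the difference maps; a minimal sketch with hypothetical weights and tiny 1x2 difference maps represented as nested lists:

```python
def overall_difference(weights, difference_maps):
    """d_t = sum_i r_i * d_i, computed element-wise over difference maps of
    equal shape (nested lists stand in for difference images)."""
    rows, cols = len(difference_maps[0]), len(difference_maps[0][0])
    total = [[0.0] * cols for _ in range(rows)]
    for r_i, d_i in zip(weights, difference_maps):
        for i in range(rows):
            for j in range(cols):
                total[i][j] += r_i * d_i[i][j]
    return total

# Two reference images; the first is prioritized with a larger weight r_1 = 0.6.
d_t = overall_difference([0.6, 0.4], [[[1.0, 2.0]], [[3.0, 4.0]]])
```

With equal weights r_i = 1/n, this reduces to the plain averaging of differences mentioned above for balancing systematic errors.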
在一些实施例中,处理设备140可以对多幅参考图像分别对应的差异进行平均处理或加权平均处理。例如,多幅参考图像可以对应相同的成像设备或对应相同的成像条件,但由于不可避免的系统误差,多幅参考图像间不可避免存在一定的不同或偏差。相应地,通过对其分别对应的差异进行平均处理或加权平均处理,可以均衡误差,从而提升后续校正效果。In some embodiments, the processing device 140 may perform average processing or weighted average processing on differences respectively corresponding to multiple reference images. For example, multiple reference images may correspond to the same imaging device or to the same imaging condition, but due to inevitable systematic errors, certain differences or deviations inevitably exist among the multiple reference images. Correspondingly, by performing averaging processing or weighted average processing on the corresponding differences, the errors can be balanced, thereby improving the subsequent correction effect.
步骤630,基于综合处理结果(如图6B所示的综合处理结果670),确定校正后的目标图像(如图6B所示的校正后的目标图像680)。在一些实施例中,该步骤630可以由处理设备140或图像处理系统200(例如,校正模块230)执行。 Step 630, based on the comprehensive processing result (such as the comprehensive processing result 670 shown in FIG. 6B), determine the corrected target image (the corrected target image 680 shown in FIG. 6B). In some embodiments, this step 630 may be performed by the processing device 140 or the image processing system 200 (eg, the correction module 230 ).
在一些实施例中,处理设备140可以通过综合处理结果,综合调整目标图像的对应特征,使其接近参考图像的对应特征,以校正目标图像。在一些实施例中,综合处理结果可以以数值、向量、矩阵、模型、图像等方式体现。例如,处理设备140可以基于综合处理结果所对应的矩阵,调整目标图像各个像素点的像素值或相关信息,以校正目标图像。In some embodiments, the processing device 140 can comprehensively adjust the corresponding features of the target image to make it close to the corresponding features of the reference image by integrating the processing results, so as to correct the target image. In some embodiments, the integrated processing results may be represented in the form of numerical values, vectors, matrices, models, images, and the like. For example, the processing device 140 may adjust pixel values or related information of each pixel of the target image based on the matrix corresponding to the comprehensive processing result, so as to correct the target image.
通过对多幅参考图像分别对应的差异进行综合处理,不仅可以考虑不同待校正特征的不同权重,还可以均衡可能的系统误差,从而提升图像校正效果。By comprehensively processing the differences corresponding to multiple reference images, not only can different weights of different features to be corrected be considered, but also possible system errors can be balanced, thereby improving the image correction effect.
图7A是根据本说明书一些实施例所示的示例性图像处理方法的流程图。在一些实施例中,流程700可以由处理设备140或图像处理系统200执行。例如,流程700可以以程序或指令的形式存储在存储设备(例如,存储设备150、处理设备140的存储单元)中,当处理器或图2所示的模块执行程序或指令时,可以实现流程700。在一些实施例中,流程700可以利用以下未描述的一个或以上附加操作,和/或不通过以下所讨论的一个或以上操作完成。另外,如图7A所示的操作的顺序并非限制性的。在一些实施例中,图3中的操作330描述的校正目标图像可以根据流程700进行。Fig. 7A is a flowchart of an exemplary image processing method according to some embodiments of the present specification. In some embodiments, the process 700 may be executed by the processing device 140 or the image processing system 200 . For example, the process 700 may be stored in a storage device (for example, storage device 150, a storage unit of the processing device 140) in the form of a program or an instruction, and when the processor or the module shown in FIG. 2 executes the program or the instruction, the process may be implemented. 700. In some embodiments, process 700 may be accomplished with one or more additional operations not described below, and/or without one or more operations discussed below. In addition, the order of operations shown in FIG. 7A is not limiting. In some embodiments, correcting the target image described in operation 330 in FIG. 3 may be performed according to process 700 .
步骤710,确定目标图像与多幅参考图像中的每一幅之间的差异。在一些实施例中,该步骤710可以由处理设备140或图像处理系统200(例如,校正模块230)执行。 Step 710, determining the difference between the target image and each of the plurality of reference images. In some embodiments, this step 710 may be performed by the processing device 140 or the image processing system 200 (eg, the correction module 230 ).
例如,如图7B所示,处理设备140可以确定目标图像750和参考图像760之间的差异770、目标图像755和参考图像765之间的差异775等。关于确定差异的更多内容参见图5A、图5B及其相关描述。For example, as shown in FIG. 7B , processing device 140 may determine difference 770 between target image 750 and reference image 760 , difference 775 between target image 755 and reference image 765 , and so on. For more information on determining the difference, refer to FIG. 5A, FIG. 5B and their related descriptions.
步骤720,基于差异,校正目标图像以确定中间校正图像。在一些实施例中,该步骤720可以由处理设备140或图像处理系统200(例如,校正模块230)执行。 Step 720, based on the difference, correct the target image to determine an intermediate corrected image. In some embodiments, this step 720 may be performed by the processing device 140 or the image processing system 200 (eg, the correction module 230 ).
例如,如图7B所示,处理设备140可以基于差异770,校正目标图像750以确定中间校正图像780;处理设备140可以基于差异775,校正目标图像755以确定中间校正图像785等。For example, as shown in FIG. 7B, the processing device 140 may correct the target image 750 based on the difference 770 to determine an intermediate corrected image 780; the processing device 140 may correct the target image 755 based on the difference 775 to determine an intermediate corrected image 785; and so on.
在一些实施例中,处理设备140可以通过差异调整目标图像的对应特征,使其接近参考图像的对应特征,以校正目标图像。关于基于差异校正目标图像的内容可见步骤330或步骤630。In some embodiments, the processing device 140 can adjust the corresponding features of the target image to be close to the corresponding features of the reference image through the difference, so as to correct the target image. Step 330 or step 630 can be seen for correcting the content of the target image based on the difference.
步骤730,基于多幅中间校正图像,确定校正后的目标图像。在一些实施例中,该步骤730可以由处理设备140或图像处理系统200(例如,校正模块230)执行。Step 730: Determine a corrected target image based on the multiple intermediate corrected images. In some embodiments, this step 730 may be performed by the processing device 140 or the image processing system 200 (eg, the correction module 230 ).
例如,处理设备140可以基于中间校正图像780、中间校正图像785等,确定最终校正后的目标图像790。For example, processing device 140 may determine final corrected target image 790 based on intermediate corrected image 780, intermediate corrected image 785, and the like.
在一些实施例中,处理设备140可以对多幅中间校正图像进行加权处理。例如,处理设备140可以对多幅中间校正图像赋予不同的权重,并基于权重进行加权处理。在一些实施例中,每一幅中间校正图像的权重可以由用户确定或图像处理系统100根据图像处理要求确定。例如,目标图像具有特征A、特征B、特征C及特征D,其中,特征A(即目标特征)的精度较高,而特征B、特征C及特征D的精度较低。参考图像可以包括三幅,分别具有较高精度的特征B、特征C及特征D。相应地,基于每幅参考图像与目标图像的差异校正目标图像,分别得到中间校正图像B、中间校正图像C及中间校正图像D。假设最终的图像处理目标是优先校正特征B、其次特征C、再其次特征D,相应地,三幅中间校正图像分别对应的差异的权重则依次降低。In some embodiments, the processing device 140 may perform weighting processing on multiple intermediate corrected images. For example, the processing device 140 may assign different weights to the multiple intermediate corrected images, and perform weighting processing based on the weights. In some embodiments, the weight of each intermediate corrected image can be determined by the user or determined by the image processing system 100 according to image processing requirements. For example, the target image has feature A, feature B, feature C, and feature D, wherein, feature A (namely, the target feature) has higher accuracy, while feature B, feature C, and feature D have lower accuracy. The reference image may include three, feature B, feature C, and feature D with relatively high precision. Correspondingly, the target image is corrected based on the difference between each reference image and the target image, and an intermediate corrected image B, an intermediate corrected image C, and an intermediate corrected image D are respectively obtained. Assuming that the final image processing goal is to correct feature B first, then feature C, and then feature D, correspondingly, the weights of the differences corresponding to the three intermediate corrected images are sequentially reduced.
仅作为示例,处理设备140可以基于下述公式(4)确定最终校正后的目标图像:As an example only, the processing device 140 may determine the final corrected target image based on the following formula (4):
I_t = ∑_{i=1}^{n} W_i·R_i     (4)
其中,I_t表示最终校正后的目标图像,W_i表示第i幅中间校正图像对应的权重,R_i表示第i幅中间校正图像,n表示多幅参考图像(或中间校正图像)的总数量,n为正整数。Where I_t denotes the final corrected target image, W_i denotes the weight corresponding to the i-th intermediate corrected image, R_i denotes the i-th intermediate corrected image, and n denotes the total number of the multiple reference images (or intermediate corrected images), n being a positive integer.
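Formula (4) can likewise be sketched as a weighted pixel-wise fusion of the intermediate corrected images (hypothetical 1x2 images; in practice the weights W_i would typically be normalized to sum to 1 so that gray levels are preserved):

```python
def fuse_corrected_images(weights, intermediate_images):
    """I_t = sum_i W_i * R_i, computed pixel-wise over equally sized
    intermediate corrected images (nested lists stand in for images)."""
    rows, cols = len(intermediate_images[0]), len(intermediate_images[0][0])
    fused = [[0.0] * cols for _ in range(rows)]
    for w_i, img in zip(weights, intermediate_images):
        for i in range(rows):
            for j in range(cols):
                fused[i][j] += w_i * img[i][j]
    return fused

# Equal weights average two hypothetical intermediate corrected images:
fused = fuse_corrected_images([0.5, 0.5], [[[2.0, 4.0]], [[4.0, 8.0]]])
# fused == [[3.0, 6.0]]
```

Unequal weights would instead prioritize the intermediate image whose corrected feature matters most, mirroring the feature B / C / D ordering described above.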
在一些实施例中,处理设备140可以对多幅参考图像分别对应的多幅中间校正图像进行平均处理或加权平均处理。例如,多幅参考图像可以对应相同的成像设备或对应相同的成像条件,但由于不可避免的系统误差,多幅中间校正图像间不可避免存在一定的不同或偏差。相应地,通过对其分别对应的差异进行平均处理或加权平均处理,可以均衡误差,从而提升后续校正效果。In some embodiments, the processing device 140 may perform average processing or weighted average processing on the multiple intermediate corrected images respectively corresponding to the multiple reference images. For example, multiple reference images may correspond to the same imaging device or to the same imaging conditions, but due to unavoidable systematic errors, certain differences or deviations inevitably exist among the multiple intermediate corrected images. Correspondingly, by performing averaging processing or weighted average processing on the corresponding differences, the errors can be balanced, thereby improving the subsequent correction effect.
通过分别基于多幅参考图像进行校正,并对多幅中间校正图像进行后续综合处理以确定最终校正后的目标图像,可以减少单次校正的运算量,且考虑不同待校正特征的不同权重,还可以均衡可能的系统误差,从而提高图像校正效果。By performing correction based on multiple reference images respectively, and performing subsequent comprehensive processing on multiple intermediate corrected images to determine the final corrected target image, the calculation amount of a single correction can be reduced, and the different weights of different features to be corrected can be considered. Possible systematic errors can be equalized, thereby improving the image correction effect.
图8A是根据本说明书一些实施例所示的示例性图像处理方法的流程图。在一些实施例中，流程800可以由处理设备140或图像处理系统200执行。例如，流程800可以以程序或指令的形式存储在存储设备（例如，存储设备150、处理设备140的存储单元）中，当处理器或图2所示的模块执行程序或指令时，可以实现流程800。在一些实施例中，流程800可以利用以下未描述的一个或以上附加操作，和/或不通过以下所讨论的一个或以上操作完成。另外，如图8A所示的操作的顺序并非限制性的。在一些实施例中，图3中的操作330描述的校正目标图像可以根据流程800进行。Fig. 8A is a flowchart of an exemplary image processing method according to some embodiments of the present specification. In some embodiments, the process 800 may be executed by the processing device 140 or the image processing system 200. For example, the process 800 may be stored in a storage device (e.g., the storage device 150 or a storage unit of the processing device 140) in the form of programs or instructions, and the process 800 may be implemented when the processor or the modules shown in FIG. 2 execute the programs or instructions. In some embodiments, the process 800 may be accomplished with one or more additional operations not described below, and/or without one or more of the operations discussed below. In addition, the order of the operations shown in FIG. 8A is not limiting. In some embodiments, the correction of the target image described in operation 330 in FIG. 3 may be performed according to the process 800.
步骤810,将至少一幅参考图像和目标图像输入第二模型。在一些实施例中,该步骤810可以由处理设备140或图像处理系统200(例如,校正模块230)执行。关于第二模型的更多内容参见图8B及其相关描述。 Step 810, input at least one reference image and target image into the second model. In some embodiments, this step 810 may be performed by the processing device 140 or the image processing system 200 (eg, the correction module 230 ). For more information on the second model, refer to FIG. 8B and its related description.
步骤820,基于第二模型的输出,确定校正后的目标图像。在一些实施例中,该步骤820可以由处理设备140或图像处理系统200(例如,校正模块230)执行。 Step 820, based on the output of the second model, determine the corrected target image. In some embodiments, this step 820 may be performed by the processing device 140 or the image processing system 200 (eg, the correction module 230 ).
图8B是根据本说明书一些实施例所示的示例性图像处理方法的示意图。Fig. 8B is a schematic diagram of an exemplary image processing method according to some embodiments of the present specification.
如图8B所示,在一些实施例中,可以将目标图像860和至少一幅参考图像865输入第二模型870,得到校正后的目标图像880。As shown in FIG. 8B , in some embodiments, the target image 860 and at least one reference image 865 may be input into the second model 870 to obtain a corrected target image 880 .
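As a minimal sketch of step 810, the target image 860 and the reference image(s) 865 could be assembled into a single multi-channel array before being fed to the second model 870 in one forward pass. The channel ordering (target first) and the helper function below are assumptions for illustration; the patent only requires that both images be input to the model.

```python
import numpy as np

def build_model_input(target_image, reference_images):
    """Stack the target image and its reference images along a new
    channel axis so they can be passed to the second model together.

    All images must already share one spatial shape; in practice this
    means they have been registered/resampled beforehand.
    """
    channels = [np.asarray(target_image)] + [np.asarray(r) for r in reference_images]
    shapes = {c.shape for c in channels}
    if len(shapes) != 1:
        raise ValueError("target and reference images must share one shape; "
                         "register/resample them first")
    return np.stack(channels, axis=0)  # shape: (1 + n_refs, H, W)
```

The resulting `(1 + n_refs, H, W)` array matches the multi-channel input convention of common CNN frameworks, which fits the model families listed below.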
在一些实施例中，第二模型870可以是卷积神经网络模型(Convolutional Neural Network,CNN)、深度神经网络模型(Deep Neural Network,DNN)、循环神经网络模型(Recurrent Neural Network,RNN)、图神经网络模型(Graph Neural Network,GNN)、生成对抗网络模型(Generative Adversarial Network,GAN)等或其任意组合。In some embodiments, the second model 870 may be a convolutional neural network (CNN) model, a deep neural network (DNN) model, a recurrent neural network (RNN) model, a graph neural network (GNN) model, a generative adversarial network (GAN) model, or the like, or any combination thereof.
在一些实施例中，第二模型870可以基于多组第二训练样本890训练确定。每组第二训练样本890可以包括样本对象的样本目标图像891、样本对象的至少一幅样本参考图像892和对应的样本校正图像893，其中，样本目标图像891和至少一幅样本参考图像892为训练数据，对应的样本校正图像893为标签（label）。In some embodiments, the second model 870 may be trained and determined based on multiple sets of second training samples 890. Each set of second training samples 890 may include a sample target image 891 of a sample object, at least one sample reference image 892 of the sample object, and a corresponding sample corrected image 893, wherein the sample target image 891 and the at least one sample reference image 892 serve as the training data, and the corresponding sample corrected image 893 serves as the label.
在一些实施例中,样本目标图像891和至少一幅样本参考图像892对应不同的成像设备。在一些实施例中,至少一幅样本参考图像892对应不同的成像设备。样本目标图像891和样本参考图像892的相互关系与目标图像与参考图像的相互关系类似,更具体的描述可见图3。In some embodiments, the sample target image 891 and the at least one sample reference image 892 correspond to different imaging devices. In some embodiments, at least one sample reference image 892 corresponds to a different imaging device. The relationship between the sample target image 891 and the sample reference image 892 is similar to the relationship between the target image and the reference image, and a more specific description can be found in FIG. 3 .
在一些实施例中,样本目标图像891和至少一幅样本参考图像892间对应的样本校正图像893可以由用户手动标记(例如,医生手动修改或编辑),也可以由图像处理系统100自动标记。In some embodiments, the corresponding sample corrected image 893 between the sample target image 891 and at least one sample reference image 892 can be manually marked by a user (eg, manually modified or edited by a doctor), or can be automatically marked by the image processing system 100 .
在一些实施例中，处理设备140（或其他处理设备）将样本目标图像891和至少一幅样本参考图像892作为输入，以对应的样本校正图像893作为监督，对第二模型870进行训练，通过学习算法（例如，随机梯度下降法）更新第二模型870的参数，以最小化第二损失函数，直到模型训练完成；或迭代训练次数达到一定次数后则停止训练。In some embodiments, the processing device 140 (or another processing device) takes the sample target image 891 and the at least one sample reference image 892 as input and the corresponding sample corrected image 893 as supervision to train the second model 870, updating the parameters of the second model 870 through a learning algorithm (e.g., stochastic gradient descent) to minimize the second loss function until the training is completed, or stopping the training after the number of training iterations reaches a preset count.
在一些实施例中,第二损失函数可以是感知损失函数。在一些实施例中,第二损失函数还可以是其他损失函数,例如,平方损失函数、逻辑回归损失函数等。In some embodiments, the second loss function may be a perceptual loss function. In some embodiments, the second loss function may also be other loss functions, for example, a square loss function, a logistic regression loss function, and the like.
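The update-until-convergence scheme described above can be illustrated with a deliberately tiny stand-in model: a single scalar weight fit by stochastic gradient descent on a squared loss, using the same two stopping criteria as the text (loss convergence, or an iteration cap). This is an illustration only; the second model 870 and its perceptual loss are far larger, and the function below is not part of the disclosure.

```python
def sgd_fit_weight(pairs, lr=0.1, max_iters=500, tol=1e-9):
    """Fit a scalar 'model' y = w * x by stochastic gradient descent
    on a squared loss, stopping when the per-epoch loss change falls
    below `tol` or after `max_iters` passes over the samples.
    """
    w, prev = 0.0, float("inf")
    for _ in range(max_iters):
        total = 0.0
        for x, y in pairs:
            err = w * x - y            # prediction error for this sample
            w -= lr * 2.0 * err * x    # gradient of (w*x - y)^2 w.r.t. w
            total += err * err
        if abs(prev - total) < tol:    # first stopping criterion: converged
            break
        prev = total                   # otherwise keep iterating (second
                                       # criterion: max_iters cap)
    return w
```

With samples drawn from y = 2x, the weight converges to 2 within a few epochs, mirroring how the second model's parameters are driven to minimize the second loss function.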
本说明书一些实施例中，（1）通过不同成像方式获取目标图像和参考图像，基于至少一幅参考图像与目标图像间的差异，校正目标图像，可以在保留目标图像的目标特征的同时，调整目标图像的其他特征接近参考图像的相应特征，从而丰富目标图像的特征，提高目标图像的图像质量；（2）通过机器学习模型或构建差异模型的方式确定参考图像和目标图像间的差异，可以提升确定差异的准确性、高效性和全面综合性；（3）通过对多幅参考图像的综合处理，可以满足不同的图像处理要求，且提高图像校正效果。In some embodiments of this specification, (1) the target image and the reference image are acquired through different imaging modalities, and correcting the target image based on the difference between at least one reference image and the target image can, while retaining the target features of the target image, adjust other features of the target image to approach the corresponding features of the reference image, thereby enriching the features of the target image and improving its image quality; (2) determining the difference between the reference image and the target image through a machine learning model or by constructing a difference model can improve the accuracy, efficiency, and comprehensiveness of the determined difference; (3) comprehensive processing of multiple reference images can satisfy different image processing requirements and improve the image correction effect.
本说明书一些实施例还提供了一种图像处理装置，该装置包括：至少一个存储介质，存储计算机指令；至少一个处理器，执行该计算机指令，以实现本说明书所述的图像处理方法，有关更多技术细节可参见图1至图8B的相关描述，在此不再赘述。Some embodiments of this specification further provide an image processing device, including: at least one storage medium storing computer instructions; and at least one processor executing the computer instructions to implement the image processing method described in this specification. For more technical details, refer to the relevant descriptions of FIG. 1 to FIG. 8B, which are not repeated here.
本说明书一些实施例还提供了一种计算机可读存储介质，该存储介质存储计算机指令，当计算机读取该计算机指令时，计算机执行本说明书所述的图像处理方法，有关更多技术细节可参见图1至图8B的相关描述，在此不再赘述。Some embodiments of this specification further provide a computer-readable storage medium storing computer instructions. When a computer reads the computer instructions, the computer executes the image processing method described in this specification. For more technical details, refer to the relevant descriptions of FIG. 1 to FIG. 8B, which are not repeated here.
上文已对基本概念做了描述,显然,对于本领域技术人员来说,上述详细披露仅仅作为示例,而并不构成对本说明书的限定。虽然此处并没有明确说明,本领域技术人员可能会对本说明书进行各种修改、改进和修正。该类修改、改进和修正在本说明书中被建议,所以该类修改、改进、修正仍属于本说明书示范实施例的精神和范围。The basic concept has been described above, obviously, for those skilled in the art, the above detailed disclosure is only an example, and does not constitute a limitation to this description. Although not expressly stated here, those skilled in the art may make various modifications, improvements and corrections to this description. Such modifications, improvements and corrections are suggested in this specification, so such modifications, improvements and corrections still belong to the spirit and scope of the exemplary embodiments of this specification.
同时,本说明书使用了特定词语来描述本说明书的实施例。如“一个实施例”、“一实施例”、和/或“一些实施例”意指与本说明书至少一个实施例相关的某一特征、结构或特点。因此,应强调并注意的是,本说明书中在不同位置两次或多次提及的“一实施例”或“一个实施例”或“一个替代性实施例”并不一定是指同一实施例。此外,本说明书的一个或多个实施例中的某些特征、结构或特点可以进行适当的组合。Meanwhile, this specification uses specific words to describe the embodiments of this specification. For example, "one embodiment", "an embodiment", and/or "some embodiments" refer to a certain feature, structure or characteristic related to at least one embodiment of this specification. Therefore, it should be emphasized and noted that two or more references to "an embodiment" or "an embodiment" or "an alternative embodiment" in different places in this specification do not necessarily refer to the same embodiment . In addition, certain features, structures or characteristics in one or more embodiments of this specification may be properly combined.
此外，除非权利要求中明确说明，本说明书所述处理元素和序列的顺序、数字字母的使用、或其他名称的使用，并非用于限定本说明书流程和方法的顺序。尽管上述披露中通过各种示例讨论了一些目前认为有用的发明实施例，但应当理解的是，该类细节仅起到说明的目的，附加的权利要求并不仅限于披露的实施例，相反，权利要求旨在覆盖所有符合本说明书实施例实质和范围的修正和等价组合。例如，虽然以上所描述的系统组件可以通过硬件设备实现，但是也可以只通过软件的解决方案得以实现，如在现有的服务器或移动设备上安装所描述的系统。In addition, unless explicitly stated in the claims, the order of the processing elements and sequences described in this specification, the use of numbers and letters, or the use of other names is not intended to limit the order of the processes and methods of this specification. Although the foregoing disclosure discusses, through various examples, some embodiments of the invention presently believed to be useful, it should be understood that such details are for illustrative purposes only and that the appended claims are not limited to the disclosed embodiments; rather, the claims are intended to cover all modifications and equivalent combinations that fall within the spirit and scope of the embodiments of this specification. For example, although the system components described above may be implemented by hardware devices, they may also be implemented by a software-only solution, such as installing the described system on an existing server or mobile device.
同理，应当注意的是，为了简化本说明书披露的表述，从而帮助对一个或多个发明实施例的理解，前文对本说明书实施例的描述中，有时会将多种特征归并至一个实施例、附图或对其的描述中。但是，这种披露方法并不意味着本说明书对象所需要的特征比权利要求中提及的特征多。实际上，实施例的特征要少于上述披露的单个实施例的全部特征。Likewise, it should be noted that, to simplify the presentation of this disclosure and thereby aid the understanding of one or more embodiments of the invention, the foregoing description of the embodiments sometimes groups multiple features into a single embodiment, drawing, or description thereof. This method of disclosure does not, however, imply that the subject matter of this specification requires more features than are recited in the claims. In fact, the features of an embodiment may be fewer than all the features of a single embodiment disclosed above.
一些实施例中使用了描述成分、属性数量的数字，应当理解的是，此类用于实施例描述的数字，在一些示例中使用了修饰词“大约”、“近似”或“大体上”来修饰。除非另外说明，“大约”、“近似”或“大体上”表明所述数字允许有±20%的变化。相应地，在一些实施例中，说明书和权利要求中使用的数值参数均为近似值，该近似值根据个别实施例所需特点可以发生改变。在一些实施例中，数值参数应考虑规定的有效数位并采用一般位数保留的方法。尽管本说明书一些实施例中用于确认其范围广度的数值域和参数为近似值，在具体实施例中，此类数值的设定在可行范围内尽可能精确。In some embodiments, numbers describing the quantities of components and attributes are used. It should be understood that such numbers used in the description of the embodiments are modified in some examples by the terms "about", "approximately", or "substantially". Unless otherwise stated, "about", "approximately", or "substantially" indicates that a variation of ±20% of the stated number is allowed. Accordingly, in some embodiments, the numerical parameters used in the specification and claims are approximations that may vary depending on the desired characteristics of individual embodiments. In some embodiments, the numerical parameters should take into account the specified significant digits and adopt a general method of retaining digits. Although the numerical ranges and parameters used to confirm the breadth of their scope in some embodiments of this specification are approximations, in specific embodiments such numerical values are set as precisely as practicable.
针对本说明书引用的每个专利、专利申请、专利申请公开物和其他材料，如文章、书籍、说明书、出版物、文档等，特此将其全部内容并入本说明书作为参考。与本说明书内容不一致或产生冲突的申请历史文件除外，对本说明书权利要求最广范围有限制的文件（当前或之后附加于本说明书中的）也除外。需要说明的是，如果本说明书附属材料中的描述、定义、和/或术语的使用与本说明书所述内容有不一致或冲突的地方，以本说明书的描述、定义和/或术语的使用为准。Each patent, patent application, patent application publication, and other material, such as an article, book, specification, publication, or document, cited in this specification is hereby incorporated by reference in its entirety, except for application history documents that are inconsistent with or conflict with the content of this specification, and except for documents (currently or later appended to this specification) that limit the broadest scope of the claims of this specification. It should be noted that if there is any inconsistency or conflict between the descriptions, definitions, and/or use of terms in the materials accompanying this specification and the content of this specification, the descriptions, definitions, and/or use of terms in this specification shall prevail.
最后,应当理解的是,本说明书中所述实施例仅用以说明本说明书实施例的原则。其他的变形也可能属于本说明书的范围。因此,作为示例而非限制,本说明书实施例的替代配置可视为与本说明书的教导一致。相应地,本说明书的实施例不仅限于本说明书明确介绍和描述的实施例。Finally, it should be understood that the embodiments described in this specification are only used to illustrate the principles of the embodiments of this specification. Other modifications are also possible within the scope of this description. Therefore, by way of example and not limitation, alternative configurations of the embodiments of this specification may be considered consistent with the teachings of this specification. Accordingly, the embodiments of this specification are not limited to the embodiments explicitly introduced and described in this specification.

Claims (28)

  1. 一种图像处理系统,包括:An image processing system comprising:
    存储设备,存储计算机指令;storage devices, which store computer instructions;
    处理器,与所述存储设备相连接,当执行所述计算机指令时,所述处理器使所述系统执行下述操作:A processor, connected to the storage device, when executing the computer instructions, the processor causes the system to perform the following operations:
    获取对象的目标图像;Get the target image of the object;
    获取所述对象的至少一幅参考图像,所述至少一幅参考图像对应的成像设备不同于所述目标图像对应的成像设备;以及acquiring at least one reference image of the object, the at least one reference image corresponding to an imaging device different from the imaging device corresponding to the target image; and
    基于所述至少一幅参考图像,校正所述目标图像。The target image is corrected based on the at least one reference image.
  2. 根据权利要求1所述的系统,其中,所述基于所述至少一幅参考图像,校正所述目标图像包括:The system of claim 1, wherein said correcting said target image based on said at least one reference image comprises:
    预处理所述目标图像;以及preprocessing the target image; and
    基于所述至少一幅参考图像,校正所述预处理后的目标图像。Correcting the preprocessed target image based on the at least one reference image.
  3. 根据权利要求1所述的系统,其中,所述基于所述至少一幅参考图像,校正所述目标图像包括:The system of claim 1, wherein said correcting said target image based on said at least one reference image comprises:
    确定所述目标图像与所述至少一幅参考图像的差异;以及determining a difference between the target image and the at least one reference image; and
    基于所述差异,校正所述目标图像。Based on the difference, the target image is corrected.
  4. 根据权利要求3所述的系统,其中,所述差异包括所述对象的同一位置点在所述目标图像和所述至少一幅参考图像中的空间位置差异。The system according to claim 3, wherein the difference comprises a spatial position difference of the same position point of the object in the target image and the at least one reference image.
  5. 根据权利要求3所述的系统,其中,所述确定所述目标图像与所述至少一幅参考图像的差异包括:The system according to claim 3, wherein said determining the difference between said target image and said at least one reference image comprises:
    将所述至少一幅参考图像和所述目标图像进行配准;以及registering the at least one reference image and the target image; and
    基于所述至少一幅参考图像和所述目标图像的第一配准结果,确定所述差异。The difference is determined based on a first registration result of the at least one reference image and the target image.
  6. 根据权利要求3所述的系统,其中,所述确定所述目标图像与所述至少一幅参考图像的差异包括:The system according to claim 3, wherein said determining the difference between said target image and said at least one reference image comprises:
    将所述至少一幅参考图像和所述目标图像输入训练好的机器学习模型;以及inputting said at least one reference image and said target image into a trained machine learning model; and
    基于所述机器学习模型的输出,确定所述差异。Based on the output of the machine learning model, the difference is determined.
  7. 根据权利要求3所述的系统,其中,所述确定所述目标图像与所述至少一幅参考图像的差异包括:The system according to claim 3, wherein said determining the difference between said target image and said at least one reference image comprises:
    对于所述对象的多个位置点中的每一个,确定所述位置点在所述目标图像和所述至少一幅参考图像中的点差异;以及For each of a plurality of location points of the object, determining a point difference of the location point in the target image and the at least one reference image; and
    基于所述多个位置点分别对应的多个点差异,通过构建差异模型以确定所述差异。Based on the multiple point differences respectively corresponding to the multiple position points, the difference is determined by constructing a difference model.
  8. 根据权利要求1所述的系统,其中,所述至少一幅参考图像由不同的成像设备采集。The system of claim 1, wherein the at least one reference image is acquired by a different imaging device.
  9. 根据权利要求1所述的系统,其中,The system of claim 1, wherein,
    所述至少一幅参考图像包括多幅参考图像;以及the at least one reference image includes a plurality of reference images; and
    所述基于所述至少一幅参考图像,校正所述目标图像包括:The correcting the target image based on the at least one reference image includes:
    确定所述目标图像与所述多幅参考图像中的每一幅之间的差异;determining differences between the target image and each of the plurality of reference images;
    对所述多幅参考图像分别对应的差异进行综合处理;以及performing comprehensive processing on the differences respectively corresponding to the plurality of reference images; and
    基于综合处理结果,校正所述目标图像。Based on the comprehensive processing result, the target image is corrected.
  10. 根据权利要求1所述的系统,其中,The system of claim 1, wherein,
    所述至少一幅参考图像包括多幅参考图像;以及the at least one reference image includes a plurality of reference images; and
    所述基于所述至少一幅参考图像,校正所述目标图像包括:The correcting the target image based on the at least one reference image includes:
    确定所述目标图像与所述多幅参考图像中的每一幅之间的差异;determining differences between the target image and each of the plurality of reference images;
    基于所述差异,校正所述目标图像以确定中间校正图像;以及based on the difference, correcting the target image to determine an intermediate corrected image; and
    基于多幅中间校正图像,确定校正后的目标图像。Based on the plurality of intermediate corrected images, a corrected target image is determined.
  11. 根据权利要求1所述的系统,其中,所述基于所述至少一幅参考图像,校正所述目标图像包括:The system of claim 1, wherein said correcting said target image based on said at least one reference image comprises:
    将所述至少一幅参考图像和所述目标图像进行配准;以及registering the at least one reference image and the target image; and
    基于所述至少一幅参考图像和所述目标图像的第二配准结果,确定校正后的目标图像。Based on the second registration result of the at least one reference image and the target image, a corrected target image is determined.
  12. 根据权利要求1所述的系统,其中,所述基于所述至少一幅参考图像,校正所述目标图像包括:The system of claim 1, wherein said correcting said target image based on said at least one reference image comprises:
    将所述至少一幅参考图像和所述目标图像输入训练好的机器学习模型;以及inputting said at least one reference image and said target image into a trained machine learning model; and
    基于所述机器学习模型的输出,确定校正后的目标图像。Based on the output of the machine learning model, a corrected target image is determined.
  13. 根据权利要求1所述的系统,其中,The system of claim 1, wherein,
    所述目标图像包括磁共振（magnetic resonance imaging，MRI）图像；以及The target image includes a magnetic resonance imaging (MRI) image; and
    所述至少一幅参考图像包括计算机断层扫描图像(computed tomography,CT)图像。The at least one reference image includes a computed tomography (CT) image.
  14. 一种图像处理方法,包括:An image processing method, comprising:
    获取对象的目标图像;Get the target image of the object;
    获取所述对象的至少一幅参考图像,所述至少一幅参考图像对应的成像设备不同于所述目标图像对应的成像设备;以及acquiring at least one reference image of the object, the at least one reference image corresponding to an imaging device different from the imaging device corresponding to the target image; and
    基于所述至少一幅参考图像,校正所述目标图像。The target image is corrected based on the at least one reference image.
  15. 根据权利要求14所述的方法,其中,所述基于所述至少一幅参考图像,校正所述目标图像包括:The method according to claim 14, wherein said correcting said target image based on said at least one reference image comprises:
    预处理所述目标图像;以及preprocessing the target image; and
    基于所述至少一幅参考图像,校正所述预处理后的目标图像。Correcting the preprocessed target image based on the at least one reference image.
  16. 根据权利要求14所述的方法,其中,所述基于所述至少一幅参考图像,校正所述目标图像包括:The method according to claim 14, wherein said correcting said target image based on said at least one reference image comprises:
    确定所述目标图像与所述至少一幅参考图像的差异;以及determining a difference between the target image and the at least one reference image; and
    基于所述差异,校正所述目标图像。Based on the difference, the target image is corrected.
  17. 根据权利要求16所述的方法,其中,所述差异包括所述对象的同一位置点在所述目标图像和所述至少一幅参考图像中的空间位置差异。The method according to claim 16, wherein the difference comprises a spatial position difference of the same position point of the object in the target image and the at least one reference image.
  18. 根据权利要求16所述的方法,其中,所述确定所述目标图像与所述至少一幅参考图像的差异包括:The method according to claim 16, wherein said determining the difference between said target image and said at least one reference image comprises:
    将所述至少一幅参考图像和所述目标图像进行配准;以及registering the at least one reference image and the target image; and
    基于所述至少一幅参考图像和所述目标图像的第一配准结果,确定所述差异。The difference is determined based on a first registration result of the at least one reference image and the target image.
  19. 根据权利要求16所述的方法,其中,所述确定所述目标图像与所述至少一幅参考图 像的差异包括:The method of claim 16, wherein said determining the difference between said target image and said at least one reference image comprises:
    将所述至少一幅参考图像和所述目标图像输入训练好的机器学习模型;以及inputting said at least one reference image and said target image into a trained machine learning model; and
    基于所述机器学习模型的输出,确定所述差异。Based on the output of the machine learning model, the difference is determined.
  20. 根据权利要求16所述的方法,其中,所述确定所述目标图像与所述至少一幅参考图像的差异包括:The method according to claim 16, wherein said determining the difference between said target image and said at least one reference image comprises:
    对于所述对象的多个位置点中的每一个,确定所述位置点在所述目标图像和所述至少一幅参考图像中的点差异;以及For each of a plurality of location points of the object, determining a point difference of the location point in the target image and the at least one reference image; and
    基于所述多个位置点分别对应的多个点差异,通过构建差异模型以确定所述差异。Based on the multiple point differences respectively corresponding to the multiple position points, the difference is determined by constructing a difference model.
  21. 根据权利要求14所述的方法,其中,所述至少一幅参考图像由不同的成像设备采集。The method of claim 14, wherein the at least one reference image is acquired by a different imaging device.
  22. 根据权利要求14所述的方法,其中,The method of claim 14, wherein,
    所述至少一幅参考图像包括多幅参考图像;以及the at least one reference image includes a plurality of reference images; and
    所述基于所述至少一幅参考图像,校正所述目标图像包括:The correcting the target image based on the at least one reference image includes:
    确定所述目标图像与所述多幅参考图像中的每一幅之间的差异;determining differences between the target image and each of the plurality of reference images;
    对所述多幅参考图像分别对应的差异进行综合处理;以及performing comprehensive processing on the differences respectively corresponding to the plurality of reference images; and
    基于综合处理结果,校正所述目标图像。Based on the comprehensive processing result, the target image is corrected.
  23. 根据权利要求14所述的方法,其中,The method of claim 14, wherein,
    所述至少一幅参考图像包括多幅参考图像;以及the at least one reference image includes a plurality of reference images; and
    所述基于所述至少一幅参考图像,校正所述目标图像包括:The correcting the target image based on the at least one reference image includes:
    确定所述目标图像与所述多幅参考图像中的每一幅之间的差异;determining differences between the target image and each of the plurality of reference images;
    基于所述差异,校正所述目标图像以确定中间校正图像;以及based on the difference, correcting the target image to determine an intermediate corrected image; and
    基于多幅中间校正图像,确定校正后的目标图像。Based on the plurality of intermediate corrected images, a corrected target image is determined.
  24. 根据权利要求14所述的方法,其中,所述基于所述至少一幅参考图像,校正所述目标图像包括:The method according to claim 14, wherein said correcting said target image based on said at least one reference image comprises:
    将所述至少一幅参考图像和所述目标图像进行配准;以及registering the at least one reference image and the target image; and
    基于所述至少一幅参考图像和所述目标图像的第二配准结果,确定校正后的目标图像。Based on the second registration result of the at least one reference image and the target image, a corrected target image is determined.
  25. 根据权利要求14所述的方法,其中,所述基于所述至少一幅参考图像,校正所述目标图像包括:The method according to claim 14, wherein said correcting said target image based on said at least one reference image comprises:
    将所述至少一幅参考图像和所述目标图像输入训练好的机器学习模型;以及inputting said at least one reference image and said target image into a trained machine learning model; and
    基于所述机器学习模型的输出,确定校正后的目标图像。Based on the output of the machine learning model, a corrected target image is determined.
  26. 根据权利要求14所述的方法,其中,The method of claim 14, wherein,
    所述目标图像包括磁共振（magnetic resonance imaging，MRI）图像；以及The target image includes a magnetic resonance imaging (MRI) image; and
    所述至少一幅参考图像包括计算机断层扫描图像(computed tomography,CT)图像。The at least one reference image includes a computed tomography (CT) image.
  27. 一种计算机可读存储介质,所述存储介质存储计算机指令,当计算机读取所述计算机指令,所述计算机执行一种图像处理方法,包括:A computer-readable storage medium, the storage medium stores computer instructions, and when a computer reads the computer instructions, the computer executes an image processing method, including:
    获取对象的目标图像;Get the target image of the object;
    获取所述对象的至少一幅参考图像,所述至少一幅参考图像对应的成像设备不同于所述目标图像对应的成像设备;以及acquiring at least one reference image of the object, the at least one reference image corresponding to an imaging device different from the imaging device corresponding to the target image; and
    基于所述至少一幅参考图像,校正所述目标图像。The target image is corrected based on the at least one reference image.
  28. 一种图像处理系统,包括:An image processing system comprising:
    目标图像获取模块,用于获取对象的目标图像;A target image acquisition module, configured to acquire a target image of an object;
    参考图像获取模块,用于获取所述对象的至少一幅参考图像,所述至少一幅参考图像对应的成像设备不同于所述目标图像对应的成像设备;以及A reference image acquisition module, configured to acquire at least one reference image of the object, the imaging device corresponding to the at least one reference image is different from the imaging device corresponding to the target image; and
    校正模块,用于基于所述至少一幅参考图像,校正所述目标图像。A correction module, configured to correct the target image based on the at least one reference image.
PCT/CN2021/136183 2021-12-07 2021-12-07 Image processing method and system WO2023102749A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2021/136183 WO2023102749A1 (en) 2021-12-07 2021-12-07 Image processing method and system
CN202180102179.XA CN117940958A (en) 2021-12-07 2021-12-07 Image processing method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/136183 WO2023102749A1 (en) 2021-12-07 2021-12-07 Image processing method and system

Publications (1)

Publication Number Publication Date
WO2023102749A1 true WO2023102749A1 (en) 2023-06-15

Family

ID=86729513

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/136183 WO2023102749A1 (en) 2021-12-07 2021-12-07 Image processing method and system

Country Status (2)

Country Link
CN (1) CN117940958A (en)
WO (1) WO2023102749A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107638188A (en) * 2017-09-28 2018-01-30 江苏赛诺格兰医疗科技有限公司 Image attenuation bearing calibration and device
US20210065412A1 (en) * 2018-01-27 2021-03-04 Uih America, Inc. Systems and methods for correcting mismatch induced by respiratory motion in positron emission tomography image reconstruction
CN113349809A (en) * 2020-03-05 2021-09-07 高健 Image reconstruction method of multi-modal imaging system
CN113450397A (en) * 2021-06-25 2021-09-28 广州柏视医疗科技有限公司 Image deformation registration method based on deep learning


Also Published As

Publication number Publication date
CN117940958A (en) 2024-04-26


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21966673

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 202180102179.X

Country of ref document: CN