WO2023202285A1 - Image processing method, apparatus, computer device, and storage medium - Google Patents

Image processing method, apparatus, computer device, and storage medium

Info

Publication number
WO2023202285A1
Authority
WO
WIPO (PCT)
Prior art keywords
region
adjusted
artifact
information
image processing
Prior art date
Application number
PCT/CN2023/081924
Other languages
English (en)
French (fr)
Inventor
王红
李悦翔
郑冶枫
Original Assignee
腾讯科技(深圳)有限公司
Priority date
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Co., Ltd.)
Publication of WO2023202285A1

Classifications

    • G: PHYSICS
      • G06: COMPUTING; CALCULATING OR COUNTING
        • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T 11/00: 2D [Two Dimensional] image generation
            • G06T 11/003: Reconstruction from projections, e.g. tomography
              • G06T 11/008: Specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction
          • G06T 5/00: Image enhancement or restoration
            • G06T 5/20: Image enhancement or restoration using local operators
            • G06T 5/70: Denoising; Smoothing
          • G06T 2207/00: Indexing scheme for image analysis or image enhancement
            • G06T 2207/10: Image acquisition modality
              • G06T 2207/10072: Tomographic images
                • G06T 2207/10081: Computed x-ray tomography [CT]
            • G06T 2207/20: Special algorithmic details
              • G06T 2207/20081: Training; Learning
              • G06T 2207/20084: Artificial neural networks [ANN]

Definitions

  • the embodiments of the present application relate to the field of computer technology, and in particular, to an image processing method, device, computer equipment, and storage medium.
  • Computed Tomography can detect tissue and organ structures in the human body non-destructively, so it is widely used in the medical field.
  • Computed tomography produces CT images; when the scanned body contains metal implants, metal artifacts appear in the collected CT images and degrade image quality.
  • Image processing models are used to remove metal artifacts from CT images, but current image processing models remove them poorly.
  • Embodiments of the present application provide an image processing method, device, computer equipment and storage medium, which improves the effect of removing metal artifacts.
  • the technical solutions are as follows:
  • an image processing method includes:
  • the computer device acquires an image processing model; the image processing model includes structural parameters that represent the structure of the metal artifact, including a first-region structure parameter of a first region and a second-region structure parameter of a second region, where the first region is any region in the metal artifact and the second region is a region in the metal artifact other than the first region;
  • the computer device adjusts the first-region structure parameter based on training samples to obtain an adjusted first-region structure parameter; based on the angle difference between the first region and the second region, it adjusts the adjusted first-region structure parameter and determines the resulting region structure parameter as the adjusted second-region structure parameter;
  • an image processing device includes:
  • a model acquisition module used to acquire an image processing model.
  • the image processing model includes structural parameters.
  • the structural parameters represent the structure of the metal artifact.
  • the structural parameters include a first-region structure parameter of the first region and a second-region structure parameter of the second region; the first region is any region in the metal artifact, and the second region is a region in the metal artifact other than the first region;
  • a model training module configured to, when training the image processing model, adjust the first-region structure parameter based on training samples to obtain an adjusted first-region structure parameter, and, based on the angle difference between the first region and the second region, adjust the adjusted first-region structure parameter and determine the resulting region structure parameter as the adjusted second-region structure parameter;
  • the image processing model after training is used to remove metal artifacts in any medical image based on the adjusted structural parameters.
  • the first-region structure parameter is represented by the product of a weight coefficient and a first original region structure parameter of the first region; the model training module is configured to, when training the image processing model, adjust the weight coefficient and the first original region structure parameter based on the training samples to obtain an adjusted weight coefficient and an adjusted first original region structure parameter.
  • The adjusted first-region structure parameter is represented by the product of the adjusted weight coefficient and the adjusted first original region structure parameter.
  • each region in the metal artifact includes at least one strip artifact; the first original region structure parameter of the first region is a matrix, and the matrix is used to represent the first region;
  • the model training module is used for:
  • the target value indicates that the target position does not correspond to any sub-region in the first area.
  • the non-target positions in the adjusted matrix correspond respectively to multiple sub-regions in the first region.
  • A non-target position refers to any position in the adjusted matrix other than the target position where the target value is located.
  • The element at a non-target position indicates whether the corresponding sub-region in the first region contains a strip artifact and, if it does, represents that strip artifact.
  • model training module is used for:
  • the adjusted first region structure parameter is adjusted, and the obtained region structure parameter is determined as the adjusted second region structure parameter.
  • the first-region structure parameter is a matrix; the elements at non-target positions in the matrix indicate whether the corresponding sub-region in the first region contains a strip artifact and, if so, represent that strip artifact; a non-target position refers to any position in the matrix other than the target position where the target value is located;
  • Model training module for:
  • the image processing model further includes position extraction parameters
  • the model training module is further configured to adjust the position extraction parameters based on the training samples when training the image processing model
  • the trained image processing model is used to remove metal artifacts in any medical image based on the adjusted structural parameters and adjusted position extraction parameters.
  • the device further includes:
  • An image processing module configured to call the trained image processing model, perform position extraction on the medical image based on the position extraction parameters, and obtain multiple pieces of regional position information, each of which represents the position of one region of the metal artifact contained in the medical image.
  • the image processing module includes:
  • a position gradient determination unit configured to determine regional position gradient information of the plurality of regions by comparing the medical image with the target image, where the regional position gradient information indicates the change amplitude of the regional position information
  • a location information determination unit configured to respectively adjust the multiple regional location information based on the multiple regional location gradient information to obtain the adjusted multiple regional location information
  • An artifact removal unit configured to determine adjusted first artifact information based on the adjusted plurality of pieces of regional position information, the adjusted first-region structure parameter, and the adjusted second-region structure parameter, and to perform artifact removal on the medical image based on the adjusted first artifact information until a target number of target images are obtained, determining the last target image obtained as the image of the medical image after the metal artifact is removed.
  • the position gradient determination unit is used for:
  • regional position gradient information of the plurality of regions is determined respectively.
  • the image processing model includes a position extraction network and an artifact removal network; the device further includes:
  • the image processing module is configured to call the position extraction network to perform position extraction on the medical image to obtain regional position information of multiple regions in the metal artifact, and to call the artifact removal network to determine first artifact information based on the multiple pieces of regional position information, the adjusted first-region structure parameter, and the adjusted second-region structure parameter, and to perform artifact removal on the medical image based on the first artifact information to obtain the target image.
  • a computer device includes a processor and a memory; at least one computer program is stored in the memory, and the at least one computer program is loaded and executed by the processor to implement the operations performed by the image processing method described in the above aspects.
  • a computer-readable storage medium is provided; at least one computer program is stored in the computer-readable storage medium, and the at least one computer program is loaded and executed by a processor to implement the operations performed by the image processing method described in the above aspect.
  • a computer program product including a computer program that, when executed by a processor, implements the operations performed by the image processing method described in the above aspect.
  • The technical solution provided by the embodiments of the present application exploits the structural characteristic that a metal artifact consists of multiple rotationally symmetric regions.
  • The training samples are first used to adjust the first-region structure parameter of the first region to obtain the adjusted first-region structure parameter; the adjusted first-region structure parameter is then adjusted again, and the resulting region structure parameter is determined as the adjusted second-region structure parameter of the second region.
  • The structural characteristics of metal artifacts are thereby used as prior knowledge when removing metal artifacts.
  • Because the characteristic that a metal artifact comprises multiple rotationally symmetric regions is fully considered, the effect of the image processing model in removing metal artifacts can be improved.
  • Figure 1 is a schematic diagram of an implementation environment provided by an embodiment of the present application.
  • Figure 2 is a flow chart of an image processing method provided by an embodiment of the present application.
  • Figure 3 is a flow chart of another image processing method provided by an embodiment of the present application.
  • Figure 4 is a schematic diagram of a medical image provided by an embodiment of the present application.
  • Figure 5 is a schematic diagram of a model structure provided by an embodiment of the present application.
  • Figure 7 is a flow chart of yet another image processing method provided by an embodiment of the present application.
  • Figure 9 is a flow chart of yet another image processing method provided by an embodiment of the present application.
  • Figure 10 is a schematic structural diagram of an image processing device provided by an embodiment of the present application.
  • Figure 11 is a schematic structural diagram of another image processing device provided by an embodiment of the present application.
  • Figure 12 is a schematic structural diagram of a terminal provided by an embodiment of the present application.
  • Figure 13 is a schematic structural diagram of a server provided by an embodiment of the present application.
  • The terms first, second, etc. used in this application may be used to describe various concepts herein, but unless otherwise specified, these concepts are not limited by these terms. These terms are only used to distinguish one concept from another.
  • first arrangement order may be called a second arrangement order
  • second arrangement order may be called a first arrangement order
  • At least one includes one, two, or more than two; multiple includes two or more; each refers to each of the corresponding plurality; and any refers to any one of the plurality.
  • For example, multiple angles include 3 angles; each angle refers to each of the 3 angles, and any angle refers to any one of the 3 angles, which may be the first, the second, or the third.
  • Artificial Intelligence is a theory, method, technology and application system that uses digital computers or machines controlled by digital computers to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge and use knowledge to obtain the best results.
  • artificial intelligence is a comprehensive technology of computer science that attempts to understand the essence of intelligence and produce a new intelligent machine that can respond in a similar way to human intelligence.
  • Artificial intelligence is the study of the design principles and implementation methods of various intelligent machines, so that the machines have the functions of perception, reasoning and decision-making.
  • Artificial intelligence technology is a comprehensive subject that covers a wide range of fields, including both hardware-level technology and software-level technology.
  • Basic artificial intelligence technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technology, operation/interaction systems, mechatronics and other technologies.
  • Artificial intelligence software technology includes computer vision technology, speech processing technology, natural language processing technology, machine learning/deep learning, autonomous driving, smart transportation and other major directions.
  • Computer vision technology (Computer Vision, CV) is a science that studies how to make machines "see"; it uses cameras and computers in place of human eyes to identify, track, and measure targets, and further performs graphics processing so that the result becomes an image more suitable for human observation or for transmission to instruments for detection. As a scientific discipline, computer vision studies related theories and technologies, attempting to build artificial intelligence systems that can obtain information from images or multi-dimensional data.
  • Computer vision technology usually includes image processing, image recognition, image semantic understanding, image retrieval, OCR (Optical Character Recognition, optical character recognition), video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D (3 Dimensions) , three-dimensional) technology, virtual reality, augmented reality, simultaneous positioning and map construction, autonomous driving, smart transportation and other technologies, as well as common biometric identification technologies such as face recognition and fingerprint recognition.
  • artificial intelligence technology has been researched and applied in many fields, such as smart homes, smart wearable devices, virtual assistants, smart speakers, smart marketing, driverless vehicles, autonomous driving, drones, robots, smart medical care, smart customer service, the Internet of Vehicles, and smart transportation. It is believed that with the development of technology, artificial intelligence technology will be applied in more fields and play an increasingly important role.
  • the image processing method provided by the embodiment of the present application uses computer vision technology and machine learning technology in artificial intelligence to perform artifact processing on medical images that include metal artifacts, and obtain images after removing metal artifacts.
  • the computer device is a terminal or a server.
  • the server is an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server that provides basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communications, middleware services, domain name services, security services, CDN (Content Delivery Network), big data, and artificial intelligence platforms.
  • the terminal is a smartphone, tablet, laptop, desktop computer, smart speaker, smart watch, etc., but is not limited thereto.
  • the computer program involved in the embodiment of the present application may be deployed and executed on one computer device, on multiple computer devices located at one site, or on multiple computer devices distributed across multiple sites and interconnected by a communication network; multiple computer devices distributed across multiple sites and interconnected by a communication network can form a blockchain system.
  • the computer device used to train the image processing model in the embodiment of the present application is a node in the blockchain system.
  • the node can store the trained image processing model in the blockchain, and then this node or the nodes corresponding to other devices in the blockchain can remove metal artifacts from images based on the image processing model.
  • FIG. 1 is a schematic diagram of an implementation environment provided by an embodiment of the present application.
  • the implementation environment includes a terminal 101 and a server 102.
  • the terminal 101 and the server 102 are connected through a wireless or wired network.
  • a target application provided by the server 102 is installed on the terminal 101.
  • the terminal 101 can implement functions such as data transmission and image processing through the target application.
  • the target application is an image processing application capable of removing metal artifacts in CT images.
  • the server 102 trains an image processing model, which is used to remove metal artifacts in the image.
  • the server 102 sends the trained image processing model to the terminal 101.
  • the terminal 101 stores the received image processing model and can subsequently use it to process any medical image that contains metal artifacts to obtain an image with the metal artifacts removed.
  • the image processing method provided by the embodiments of this application can be applied to a variety of scenarios. For example, in the medical field, scanning a patient produces a CT image of the patient, and a doctor can assess the patient's condition based on the CT image and other relevant patient information. However, if the patient has a metal implant in the body when scanned, metal artifacts will appear in the CT image; these artifacts not only reduce the quality of the CT image but also adversely affect the doctor's diagnosis. Therefore, the image processing method provided by the embodiment of the present application can be used to remove metal artifacts from CT images and improve their quality, thereby providing accurate auxiliary information for doctors during clinical diagnosis.
  • FIG. 2 is a flow chart of an image processing method provided by an embodiment of the present application.
  • the execution subject of the embodiments of this application is a computer device. Referring to Figure 2, the method includes the following steps:
  • the computer device acquires an image processing model.
  • the image processing model includes structural parameters.
  • the structural parameters represent the structure of the metal artifact.
  • the structural parameters include a first region structural parameter of the first region and a second region structural parameter of the second region.
  • the first area is any area in the metal artifact
  • the second area is an area in the metal artifact other than the first area.
  • the image processing model is used to remove metal artifacts in medical images.
  • Metal artifacts refer to the noise information caused by metal in the process of generating medical images.
  • the medical image includes metal artifacts and the metal that causes them.
  • medical images are CT images obtained by scanning target objects through computed tomography.
  • Metal artifacts in CT images are caused by the absorption and reflection of X-rays by metal inside the body or on the body surface of the target object, which produces noise around the metal and throughout the CT image.
  • the metal artifact is a rotationally symmetrical strip structure.
  • the metal artifact includes a plurality of rotationally symmetrical regions. Each region contains at least one stripe.
  • the first region and the second region are rotationally symmetric.
  • the structure of the metal artifact is a characteristic of the artifact itself and is the same across different medical images. Therefore, structural parameters are set in the image processing model and trained so that they accurately represent the structure of the metal artifact.
  • the first region structure parameter represents the structure of the first region in the metal artifact
  • the second region structure parameter represents the structure of the second region in the metal artifact.
  • the computer device adjusts the first-region structure parameter based on the training samples to obtain the adjusted first-region structure parameter, and, based on the angle difference between the first region and the second region, adjusts the adjusted first-region structure parameter.
  • The resulting region structure parameter is determined as the adjusted second-region structure parameter, and the trained image processing model is used to remove metal artifacts from any medical image based on the adjusted structural parameters.
  • the training samples include a sample medical image and a sample target image.
  • the sample medical image is an image containing metal artifacts
  • the sample target image is an image after removing metal artifacts from the sample medical image.
  • the structural parameters of the image processing model include first regional structural parameters of the first region and second regional structural parameters of the second region.
  • the computer device first adjusts the first regional structural parameters using training samples to obtain adjusted first regional structural parameters.
  • the multiple regions in the metal artifact are rotationally symmetric, that is, they all have the same shape; there is an angle difference between the first region and the second region, so rotating the first region by that angle difference yields the second region. Therefore, when the first-region structure parameter of the first region is adjusted, the structural characteristic that the first region and the second region are rotationally symmetric can be used: based on the angle difference between the first region and the second region,
  • the adjusted first-region structure parameter is adjusted again, and the resulting region structure parameter can be used as the adjusted second-region structure parameter of the second region. Therefore, during the process of training the image processing model, the computer device can first adjust the first-region structure parameter and then use the adjusted first-region structure parameter to determine the adjusted second-region structure parameter, thereby improving training efficiency.
  • the trained image processing model includes adjusted structural parameters, and the adjusted structural parameters include the adjusted first-region structure parameter and the adjusted second-region structure parameter.
  • The adjusted first-region structure parameter mentioned here refers to the region structure parameter obtained by adjusting the first-region structure parameter based on the training samples.
  • The adjusted second-region structure parameter mentioned here refers to the region structure parameter obtained by adjusting the adjusted first-region structure parameter again.
  • the method provided by the embodiment of the present application exploits the structural characteristic that a metal artifact contains multiple rotationally symmetric regions.
  • The training samples are first used to adjust the first-region structure parameter of the first region to obtain the adjusted first-region structure parameter; the adjusted first-region structure parameter is then adjusted again, and the resulting region structure parameter is determined as the adjusted second-region structure parameter of the second region.
  • The structural characteristics of the metal artifact are thereby used as prior knowledge when removing metal artifacts, and fully considering that a metal artifact comprises multiple rotationally symmetric regions can improve the effect of the image processing model in removing metal artifacts.
  • FIG 3 is a flow chart of another image processing method provided by an embodiment of the present application.
  • the execution subject of the embodiment of the present application is a computer device. Referring to Figure 3, the method includes the following steps.
  • the computer device acquires an image processing model, which includes structural parameters and position extraction parameters.
  • the image processing model is used to remove metal artifacts in medical images.
  • the image processing model is an untrained model, or a model that has been trained once or multiple times.
  • Metal artifact refers to the noise information caused by metal in the process of generating medical images, and the structural parameters represent the structure of the metal artifact.
  • the structure of metal artifacts is a characteristic of the metal artifact itself, and it is the same across different medical images. Therefore, in the embodiment of the present application, structural parameters are set in the image processing model and are obtained by training the image processing model.
  • the metal artifact is a rotationally symmetrical strip structure
  • the metal artifact can be divided into multiple rotationally symmetric regions, each containing at least one stripe. Any two of these regions have similar shapes; what differs is their angle within the metal artifact. Therefore, when setting the structural parameters in the image processing model, only the first-region structure parameter of the first region is set, and the second-region structure parameter of the second region can be obtained by adjusting the first-region structure parameter based on the angle difference between the first region and the second region.
  • the first region structure parameter represents the structure of the first region in the metal artifact
  • the second region structure parameter represents the structure of the second region in the metal artifact.
  • the metal artifact is divided into multiple rotationally symmetric regions, and a reference bar in the metal artifact is determined.
  • The reference bar refers to any bar in the metal artifact.
  • Among the multiple regions, a target bar is determined in each region, and the positions of the target bars in their respective regions correspond to each other.
  • For example, if the target bar in the first region is the rightmost bar in the first region,
  • then the target bar in the second region is also the rightmost bar in the second region.
  • the angle between each target bar and the reference bar is determined as the angle of each area.
  • the difference between the angle of the first region and the angle of the second region is determined as the angle difference between the first region and the second region.
  • ⁇ l 2 ⁇ (l-1)/L
  • ⁇ l represents the angle of the l-th area in the metal artifact
  • L is the total number of divided areas in the metal artifact. For example, L is 8.
  • the computer device can also use other methods to determine the angle of each area, which is not limited by the embodiments of this application.
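  • As a concrete illustration of the angle formula above, the following sketch (an illustrative helper, not part of the application) computes the per-region angles θ_l and the angle difference between any two regions for L = 8:

```python
import math

def region_angles(L: int = 8):
    """Angle of each of the L rotationally symmetric regions: theta_l = 2*pi*(l-1)/L."""
    return [2 * math.pi * (l - 1) / L for l in range(1, L + 1)]

def angle_difference(l_first: int, l_second: int, L: int = 8):
    """Angle difference between the l_second-th region and the l_first-th region."""
    angles = region_angles(L)
    return angles[l_second - 1] - angles[l_first - 1]

# With L = 8 the regions sit at 0, pi/4, pi/2, ..., 7*pi/4, so rotating the first
# region by angle_difference(1, 2, 8) = pi/4 gives the second region.
```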
  • the position extraction parameters are used to extract the position information of metal artifacts in medical images.
  • For different medical images, the location of the metal may differ. Therefore, position extraction parameters are set in the image processing model to extract from the medical image the location information of the positions where the metal artifact appears.
  • the image processing model includes a position extraction network and an artifact removal network.
  • Network parameters of the position extraction network include the position extraction parameters.
  • the position extraction network is used to extract the position information of metal artifacts in medical images using the position extraction parameters.
  • the network parameters of the artifact removal network include the structural parameters, and the artifact removal network is used to remove metal artifacts in medical images based on position information and structural parameters.
  • the image processing model includes multiple image processing sub-models, and each image processing sub-model includes a position extraction network and an artifact removal network.
  • the computer device obtains training samples.
  • the training samples include a sample medical image and a sample target image.
  • the sample medical image is an image containing metal artifacts
  • the sample target image is an image after removing metal artifacts from the sample medical image.
  • the computer device directly acquires a sample target image, which is an image that does not include sample metal artifacts.
  • the computer device acquires artifact information, which includes position information of the metal and structural information of the metal.
  • the computer equipment uses a data simulation method to synthesize a sample medical image including metal artifacts based on the sample target image and artifact information as well as the imaging parameters of the CT equipment. The computer equipment then determines the sample medical image and the sample target image as a training sample.
  • the computer device acquires the sample medical image, and then uses methods other than the image processing model in this application to remove artifacts from the sample medical image to obtain the sample target image.
  • the computer device adjusts the pixel values of the images in the training sample, constraining each pixel value to the range [0, 1], and then converts each pixel value to the range [0, 255].
  • the computer device crops the images of the training samples to the target size, and then randomly performs horizontal mirror flipping or vertical mirror flipping on each image, thereby increasing the diversity of images in the training samples.
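  • A minimal preprocessing sketch for the steps described above, assuming the images are NumPy arrays and that the target size is a square crop; the function name and parameter choices are illustrative, not taken from the application:

```python
import numpy as np

def preprocess(image: np.ndarray, target_size: int, rng: np.random.Generator) -> np.ndarray:
    """Constrain values to [0, 1], rescale to [0, 255], crop to target size, and randomly mirror-flip."""
    # Constrain pixel values to [0, 1], then convert to the range [0, 255].
    image = np.clip(image.astype(np.float32), 0.0, 1.0) * 255.0

    # Random crop to target_size x target_size (assumes the image is at least that large).
    h, w = image.shape[:2]
    top = rng.integers(0, h - target_size + 1)
    left = rng.integers(0, w - target_size + 1)
    image = image[top:top + target_size, left:left + target_size]

    # Randomly apply a horizontal or vertical mirror flip to increase sample diversity.
    if rng.random() < 0.5:
        image = image[:, ::-1]
    if rng.random() < 0.5:
        image = image[::-1, :]
    return image
```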
  • the computer device adjusts the first-region structure parameter and the position extraction parameters based on the training samples to obtain the adjusted first-region structure parameter and the adjusted position extraction parameters; based on the angle difference between the first region and the second region,
  • it adjusts the adjusted first-region structure parameter and determines the resulting region structure parameter as the adjusted second-region structure parameter.
  • the computer device calls an image processing model to process the sample medical image to obtain a predicted target image, and adjusts the first region structure parameters and position extraction parameters based on the predicted target image and the sample target image.
  • the structural parameters include a weight coefficient and a plurality of original regional structure parameters.
  • the product of the weight coefficient and the original regional structural parameter represents the regional structural parameters of a region, that is, for the first regional structural parameter
  • the first region structure parameter is represented by the product of the weight coefficient and the first original region structure parameter of the first region.
  • the weight coefficient and the first original area structure parameter of the first area are adjusted based on the training sample to obtain the adjusted weight coefficient and the adjusted first original area structure parameter, and then based on The adjusted weight coefficient and the adjusted first original region structure parameter determine the adjusted first region structure parameter.
  • the adjusted first-region structure parameter is represented by the product of the adjusted weight coefficient and the adjusted first original region structure parameter.
  • the adjusted second area structure parameter is determined based on the adjusted weight coefficient, the angle difference between the first area and the second area, and the adjusted first original area structure parameter.
  • the second region structure parameter is represented by a product of the weight coefficient and the second original region structure parameter of the second region.
  • the computer device adjusts the adjusted first original region structure parameter again based on the angle difference between the first region and the second region,
  • and determines the first original region structure parameter obtained by this further adjustment as the adjusted second original region structure parameter.
  • The adjusted second-region structure parameter is then represented by the product of the adjusted weight coefficient and the adjusted second original region structure parameter.
  • each area in the metal artifact includes at least one strip artifact, and the strip artifact refers to an artifact in the shape of a strip.
  • the first original region structure parameter of the first region is a matrix, and the matrix is used to represent the first region; after obtaining the matrix, the elements in the matrix that are not less than a reference value are adjusted to a target value.
  • The target value indicates that the target position does not correspond to any sub-region in the first region.
  • The non-target positions in the adjusted matrix correspond respectively to multiple sub-regions in the first region.
  • A non-target position refers to any position in the adjusted matrix other than the target positions where the target value is located.
  • the reference value is determined based on the size and preset value of the convolution kernel representing the regional structure parameters in the image processing model. For example, if the size of the convolution kernel is p*p and the preset value is h, then the reference value is ((p+1)/2)h, where h is any value greater than 0, and p is an odd number.
  • the target value is a preset value, for example, the target value is 0 or other values.
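  • The following sketch shows one reading of the thresholding step described above: elements of the p×p region-structure matrix that are not less than the reference value ((p+1)/2)·h are set to the target value 0. The function name, the default h, and this literal element-thresholding interpretation are assumptions for illustration:

```python
import numpy as np

def mask_region_kernel(kernel: np.ndarray, h: float = 0.25, target_value: float = 0.0) -> np.ndarray:
    """Set elements of a p x p region-structure matrix that are not less than the
    reference value ((p + 1) / 2) * h to the target value (one reading of the description)."""
    p = kernel.shape[0]               # kernel is assumed square with odd size p
    reference = ((p + 1) / 2) * h     # reference value from the description
    masked = kernel.copy()
    masked[masked >= reference] = target_value
    return masked
```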
  • the computer device determines a rotation parameter based on an angular difference between the first region and the second region, and the rotation parameter is used to adjust the first region structural parameter of the first region.
  • the rotation parameter is determined by θ_l, where θ_l represents the angle difference between the second region and the first region; when the angle of the first region is 0, the angle difference equals the angle of the second region. Based on the rotation parameter, the adjusted first-region structure parameter is adjusted, and the resulting region structure parameter is determined as the adjusted second-region structure parameter.
  • the first-region structure parameter is a matrix; the elements at non-target positions in the matrix indicate whether the corresponding sub-region in the first region contains a strip artifact and, if so, represent that strip artifact; a non-target position refers to any position in the matrix other than the target position where the target value is located.
  • Based on the rotation parameter, the computer device adjusts the positions of the elements in the adjusted first-region structure parameter so that, in the resulting region structure parameter, the elements at non-target positions indicate whether the corresponding sub-region in the second region contains a strip artifact and, if so, represent that strip artifact.
  • That is, by adjusting the position of each element in the first-region structure parameter, the shape of the region represented by the parameter remains unchanged while its angle becomes the angle of the second region, thereby obtaining the second-region structure parameter of the second region, as illustrated in the sketch below.
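  • A minimal sketch of this rotation step, assuming the region structure parameter is a small 2D array and using a generic image-rotation routine as a stand-in for the rotation parameter described above (the helper name is illustrative):

```python
import numpy as np
from scipy.ndimage import rotate

def second_region_kernel(first_region_kernel: np.ndarray, angle_diff_rad: float) -> np.ndarray:
    """Rotate the adjusted first-region structure parameter by the angle difference
    between the first and second regions; the rotated result is taken as the
    adjusted second-region structure parameter (shape unchanged, angle changed)."""
    angle_deg = np.degrees(angle_diff_rad)
    # reshape=False keeps the p x p size; bilinear interpolation (order=1) keeps values smooth.
    return rotate(first_region_kernel, angle=angle_deg, reshape=False, order=1,
                  mode="constant", cval=0.0)
```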
  • the computer device determines the difference information between the sample medical image and the sample target image as sample artifact information. The image processing model also outputs predicted artifact information: the predicted artifact information is the artifact information output by the image processing model, while the sample artifact information is the real artifact information of the sample medical image; the smaller the error between the predicted artifact information and the sample artifact information, the more accurate the image processing model. Therefore, the computer device determines the error information between the predicted target image and the sample target image and the error information between the predicted artifact information and the sample artifact information, and trains the image processing model based on the determined error information so that the error becomes smaller and smaller and the image processing model becomes more and more accurate.
  • L represents the error information.
  • ω_n, α_1, and α_2 are compromise parameters, which are used to balance the weights of the various error terms.
  • X represents the sample target image,
  • Y represents the sample medical image,
  • I represents the non-metal image of the sample medical image,
  • X^(n) represents the n-th predicted target image,
  • A^(n) represents the n-th predicted artifact information,
  • N represents the total number of iterations in the image processing model,
  • n represents the n-th iteration, ‖·‖_2 represents the 2-norm operation,
  • and ‖·‖_1 represents the 1-norm operation. A sketch of one possible loss built from these terms follows.
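  • The loss formula itself is not reproduced in this text, so the following is only a plausible sketch of a training objective built from the symbols listed above: per-iteration image errors weighted by ω_n, an artifact-reconstruction error weighted by α_1, and a 1-norm sparsity term weighted by α_2. The exact form used in the application may differ.

```python
import torch

def training_loss(X, Y, I, X_preds, A_preds, omegas, alpha1, alpha2):
    """Hypothetical composite loss over N iterations (a sketch, not the application's exact formula).

    X:        sample target image (metal artifacts removed), shape (B, 1, H, W)
    Y:        sample medical image containing metal artifacts
    I:        binary non-metal mask (1 = non-metal region, 0 = metal region)
    X_preds:  list of N predicted target images X^(n)
    A_preds:  list of N predicted artifact information maps A^(n)
    """
    loss = 0.0
    for n, X_n in enumerate(X_preds):
        # Weighted 2-norm error between each predicted target image and the sample target image.
        loss = loss + omegas[n] * torch.norm(I * (X - X_n), p=2)
    # Error between the final predicted artifact information and the sample artifact information (Y - X).
    loss = loss + alpha1 * torch.norm(I * (Y - X - A_preds[-1]), p=2)
    # 1-norm term encouraging sparse artifact information.
    loss = loss + alpha2 * torch.norm(A_preds[-1], p=1)
    return loss
```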
  • the computer device calls the trained image processing model to process the medical image to obtain the target image after removing metal artifacts.
  • the method provided by the embodiment of the present application exploits the structural characteristic that a metal artifact contains multiple rotationally symmetric regions.
  • The training samples are first used to adjust the first-region structure parameter of the first region to obtain the adjusted first-region structure parameter; the adjusted first-region structure parameter is then adjusted again, and the resulting region structure parameter is determined as the adjusted second-region structure parameter of the second region.
  • The structural characteristics of the metal artifact are thereby used as prior knowledge when removing metal artifacts, and fully considering that a metal artifact comprises multiple rotationally symmetric regions can improve the effect of the image processing model in removing metal artifacts.
  • the creation process of the image processing model is:
  • the non-metal image is used to represent the non-metal region in the medical image.
  • H and W are the height and width of the image respectively.
  • the pixel value in the non-metal image is 0 or 1, 0 represents the metal area and 1 represents the non-metal area;
  • A is the artifact information, which represents the metal artifact in the medical image,
  • and ⊙ denotes the point-by-point multiplication operation.
  • the above formula 1 can be expressed as the image shown in FIG. 4 , and the medical image 401 is determined by the medical image 402 and the metal artifact 403 .
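  • Although Formula 1 itself is not reproduced in this text, the surrounding description (non-metal mask, clean image X, artifact information A, point-by-point multiplication) suggests a decomposition of the form I ⊙ Y = I ⊙ X + I ⊙ A within the non-metal region. A hedged sketch under that assumption, with illustrative function names:

```python
import numpy as np

def synthesize_medical_image(X: np.ndarray, A: np.ndarray, I: np.ndarray) -> np.ndarray:
    """Compose the non-metal part of a medical image containing metal artifacts from a
    clean target image X, artifact information A, and a binary non-metal mask I
    (1 = non-metal, 0 = metal). Assumes I * Y = I * X + I * A; the metal region is zeroed by the mask."""
    return I * (X + A)

def remove_artifact(Y: np.ndarray, A: np.ndarray, I: np.ndarray) -> np.ndarray:
    """Given estimated artifact information A, recover the non-metal part of the target image."""
    return I * (Y - A)
```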
  • the artifact information of metal artifacts can be expressed as:
  • the structural parameters are represented by convolution kernels.
  • p×p is the size of the convolution kernel, and M represents the location information of the metal artifact
  • L represents the total number of multiple regions in the metal artifact
  • K represents the total number of convolution kernels in each region
  • k represents the kth convolution kernel in each region
  • ⁇ l Represents the angle of the l-th area in the metal artifact
  • ⁇ l 2 ⁇ (l-1)/L
  • The convolution kernel C refers to the convolution kernel C shown in Figure 4 above. It can be seen that when a convolution kernel is used to represent the structure of one region in the metal artifact, the kernel can be rotated to obtain a convolution kernel representing the structure of another region. Based on this characteristic of metal artifacts, the embodiment of the present application expresses the structural parameters as:
  • a_qtk and b_qtk represent the adjustment parameters to be trained, and the rotation parameter of the l-th region can be expressed as:
  • p represents the size of the convolution kernel
  • h is the preset parameter, such as h is 1/4 or other values
  • x_i represents the i-th row and x_j represents the j-th column.
  • μ(x) represents the radial mask function, and ((p+1)/2)·h is the reference value.
  • One expression applies in the case q ≤ p/2 and another otherwise; in the same way, one expression applies in the case t ≤ p/2 and another otherwise.
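  • To make the artifact-information expression above concrete: one reading is that the artifact information is obtained by convolving each region's rotated structure kernel with the corresponding position map and summing over regions and kernels. The following sketch assumes that reading; the kernel rotation reuses a generic rotation routine, and all names and shapes are illustrative:

```python
import numpy as np
import torch
import torch.nn.functional as F
from scipy.ndimage import rotate

def artifact_information(base_kernels: np.ndarray, position_maps: torch.Tensor, L: int = 8) -> torch.Tensor:
    """Assemble artifact information A from K base kernels shared across L rotationally
    symmetric regions and per-region, per-kernel position maps M (a sketch).

    base_kernels:  (K, p, p) kernels describing the first region's structure
    position_maps: (1, L*K, H, W) location information of the metal artifact
    """
    K, p, _ = base_kernels.shape
    rotated = []
    for l in range(L):
        theta_deg = np.degrees(2 * np.pi * l / L)   # theta_l = 2*pi*(l-1)/L, with l starting at 0 here
        for k in range(K):
            rotated.append(rotate(base_kernels[k], theta_deg, reshape=False, order=1))
    weight = torch.tensor(np.stack(rotated), dtype=torch.float32).unsqueeze(1)  # (L*K, 1, p, p)
    # Convolve each position map with its rotated kernel and sum the contributions.
    per_map = F.conv2d(position_maps, weight, padding=p // 2, groups=L * K)      # (1, L*K, H, W)
    return per_map.sum(dim=1, keepdim=True)                                      # (1, 1, H, W)
```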
  • Y is a medical image
  • I is a non-metallic image. Both the medical image and the non-metallic image are known.
  • the process of removing metal artifacts in medical images is the process of determining the position information M and structural parameters C in Formula 5.
  • the target image X can be determined by obtaining the position information M and structural parameters C.
  • Since the structural parameter C is a characteristic of the metal artifact itself and does not depend on the medical image, C can be assumed known, and only the position information M and the target image X need to be determined. The position information M and the target image X can be obtained by optimizing the following formula:
  • ⁇ and ⁇ are compromise parameters
  • f_1(·) and f_2(·) are regularization functions.
  • The regularization function f_1(·) represents the position feature,
  • and the position feature represents the characteristics that the position information of the metal artifact satisfies.
  • The regularization function f_2(·) represents the image feature, which represents the characteristics satisfied by an image that does not contain metal artifacts and belongs to the prior knowledge of such images.
  • The position information M and the target image X are those that minimize the above Formula 6.
  • the image processing model can be constructed according to the above Formula 5. Since multiple iterations are required to determine the position information and the target image, the image processing model includes multiple position extraction networks and multiple artifact removal networks, where the position extraction network M-net and the artifact removal network X-net are respectively expressed as:
  • the proximal network is implemented as a residual network.
  • As shown in Figure 5, the model includes the position extraction network 501 and the artifact removal network 502; M^(n-1), output by the previous position extraction network, is input to the position extraction network 501.
  • M^(n-1) is fused with the other network input, the fusion is fed into the residual network, and the residual network outputs M^(n).
  • X^(n-1), output by the previous artifact removal network, is input to the artifact removal network 502.
  • The artifact removal network 502 fuses X^(n-1) with the other network input, feeds the fusion into the residual network, and the residual network outputs X^(n).
  • the residual network includes a convolution layer, a Batch Normalization layer, a ReLU (rectified linear unit) layer, a convolution layer, a Batch Normalization layer, and a skip connection.
  • The convolution kernel size of the convolution layers is 3×3 and the stride is 1.
  • the proximal network can also adopt other types of network structures, which are not limited by the embodiments of this application; a minimal sketch of the residual block described above follows.
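  • A minimal PyTorch sketch of the residual block described above (conv 3×3 stride 1, BatchNorm, ReLU, conv 3×3, BatchNorm, plus a skip connection); the channel count is an assumption, since it is not specified here:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Conv-BN-ReLU-Conv-BN with a skip connection, kernel size 3x3 and stride 1."""
    def __init__(self, channels: int = 32):  # channel width is an assumed value
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, stride=1, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, stride=1, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Skip connection: add the block input back to the transformed features.
        return x + self.body(x)
```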
  • the image processing model provided by the embodiments of this application is created based on the metal artifact removal task in the field of image processing.
  • the network structure in the image processing model is determined by the structural characteristics of medical images that contain metal artifacts and by the structural characteristics of the metal artifacts themselves, so every operation in the image processing model has a physical meaning.
  • the structure of the entire image processing model is equivalent to a white-box operation, with good model interpretability.
  • Figure 7 is a flow chart of another image processing method provided by an embodiment of the present application.
  • the computer device processes the medical image based on a target number of image processing sub-models in the image processing model to obtain the target image after removing metal artifacts.
  • The image processing model has multiple image processing sub-models, each of which includes a position extraction network and an artifact removal network; the method then includes the following steps.
  • the computer device calls a location extraction network to determine multiple area location information of metal artifacts in the medical image.
  • the computer device inputs the medical image into the position extraction network, performs position extraction on the medical image based on the position extraction parameters, and obtains multiple region position information.
  • Each region position information represents the position of each region in the metal artifact contained in the medical image.
  • the computer device will perform multiple iterative processes on the medical image based on multiple image processing sub-models.
  • Each image processing sub-model outputs position information, artifact information, and a target image for the medical image;
  • the position information and target image output by the current image processing sub-model are used as the input of the next image processing sub-model.
  • the image processing model is a trained image processing model, and the trained image processing model includes adjusted position extraction parameters.
  • the computer device inputs the medical image into the position extraction network of the trained image processing model, performs position extraction on the medical image based on the adjusted position extraction parameters, and obtains multiple regional position information.
  • the computer device acquires the stored plurality of reference area location information and the third reference image.
  • the reference area location information is location information preset by the computer device, for example, the location information is 0.
  • the third reference image is obtained by removing artifacts from the medical image using an artifact removal method different from the one provided in the embodiment of the present application; that is, the third reference image and the target image in this embodiment are obtained by removing artifacts from the medical image in different ways.
  • For example, the third reference image is an image obtained by removing artifacts from the medical image using a linear interpolation algorithm,
  • or an image obtained by removing artifacts from the medical image using Gaussian filtering or another filtering algorithm.
  • the computer device obtains the position information and the target image output by the previous image processing sub-model, and determines the input to the position extraction network from the position information and the target image output by the previous image processing sub-model.
  • the computer device calls the artifact removal network to construct the first artifact information based on the multiple region position information, the adjusted first region structure parameters, and the adjusted second region structure parameters.
  • the trained image processing model includes adjusted structural parameters, and the adjusted structural parameters include adjusted first region structural parameters and adjusted second region structural parameters.
  • the computer device inputs the position information to the artifact removal network, which determines the regional artifact information of each region based on the adjusted structural parameters and the position information, that is, based on each region's position information and each region's structure parameter,
  • and then combines the multiple pieces of regional artifact information to form the first artifact information, which represents the predicted position information and structure information of the metal artifact in the medical image.
  • the artifact removal network includes a convolution operation, and the structural parameter is a convolution kernel. Then the computer device convolves the structural parameters and the position information in the artifact removal network to obtain the first artifact information.
  • the structural parameters are represented by a convolution kernel
  • the computer device performs a convolution process on each regional structural parameter and the regional position information to obtain the regional artifact information.
  • the computer device calls the artifact removal network to remove artifacts from the medical image according to the first artifact information to obtain the target image.
  • the target image of the medical image can be obtained.
  • the medical image includes a metal area and a non-metal area, where the metal area is an area in the medical image where metal is located, and the non-metal area is an area in the medical image that does not include metal.
  • the computer device determines the non-metal region in the medical image as a non-metal image, determines the part of the first artifact information that belongs to the non-metal region, and removes that part of the first artifact information from the non-metal image to obtain the target image.
  • the next position extraction network is the position extraction network in the next image processing sub-model of the above-mentioned artifact removal network.
  • the computer device uses the target image as the input of the next position extraction network. Based on the position extraction network, by comparing the medical image with the target image, the regional position gradient information of multiple regions is determined respectively.
  • the regional position gradient information indicates the change amplitude of the regional position information, that is, the extent to which the predicted regional position information should change toward a more correct direction.
  • the location extraction network includes a location extraction layer.
  • the computer device calls the position extraction layer to determine the difference information between the medical image and the target image as second artifact information.
  • the second artifact information represents the position information and structure information of the metal artifact in the medical image; compared with the first artifact information, the second artifact information is more accurate artifact information.
  • the computer device calls the position extraction layer and determines a plurality of regional position gradient information based on the first artifact information and the second artifact information.
  • the computer device determines the difference information between the first artifact information and the second artifact information as artifact difference information, and determines the regional position gradient information based on the artifact difference information, the adjusted first-region structure parameter, and the adjusted second-region structure parameter.
  • Since the first artifact information and the second artifact information both represent the position information and structure information of the metal artifact, the artifact difference information represents the difference in the metal artifact's position information and structure information.
  • Because the first-region and second-region structure parameters represent the structural characteristics of the metal artifact, the difference in position information, that is, the change amplitude of the position information, can be determined from the artifact difference information and the adjusted first-region and second-region structure parameters,
  • yielding the regional position gradient information.
  • the computer device determines a non-metallic area in the medical image, where the non-metallic area refers to an area in the medical image that does not include metal.
  • the computer device determines the part of the artifact difference information located in the non-metal region, and determines the multiple pieces of regional position gradient information based on the adjusted first-region structure parameter, the adjusted second-region structure parameter, and the artifact difference information located in the non-metal region, as sketched below.
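  • One way to realize this gradient step, under the assumption that the artifact information is modelled as a convolution of the structure kernels with the position maps (as in the earlier artifact-information sketch): the gradient of the squared error with respect to the position maps is the correlation of the masked artifact difference with the same kernels, which can be computed with a transposed convolution. Names and shapes are illustrative:

```python
import torch
import torch.nn.functional as F

def regional_position_gradient(first_artifact: torch.Tensor,
                               second_artifact: torch.Tensor,
                               non_metal_mask: torch.Tensor,
                               kernels: torch.Tensor) -> torch.Tensor:
    """Gradient of the regional position information (a sketch, assuming A = conv(kernels, M)).

    first_artifact:  predicted artifact information A, shape (1, 1, H, W)
    second_artifact: difference between the medical image and the target image (Y - X)
    non_metal_mask:  binary mask I of the non-metal region
    kernels:         adjusted region structure kernels, shape (L*K, 1, p, p)
    """
    p = kernels.shape[-1]
    # Artifact difference information, restricted to the non-metal region.
    diff = non_metal_mask * (first_artifact - second_artifact)
    # Correlating the masked difference with each kernel gives one gradient map per region/kernel.
    return F.conv_transpose2d(diff, kernels.transpose(0, 1), padding=p // 2)
```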
  • the computer device calls the next position extraction network, adjusts the multiple regional position information according to the multiple regional position gradient information, and obtains the adjusted multiple regional position information.
  • The computer device calls the next artifact removal network, removes artifacts from the medical image based on the adjusted multiple pieces of regional position information, and obtains an adjusted target image, until the target number of target images output by the image processing sub-models is obtained.
  • The last target image obtained is determined to be the image of the medical image after removing the metal artifacts.
  • The image processing model includes a target number of image processing sub-models. After an artifact removal network outputs its target image, the computer device continues to use the position information, the target image and the medical image output by that artifact removal network as the input of the next image processing sub-model, which outputs the next position information and the next target image. This repeats until all of the target number of image processing sub-models have performed the above metal artifact removal process and each has output a target image, that is, until the target image output by the last image processing sub-model in the image processing model is obtained, at which point the entire image processing process is complete. The computer device determines the last target image obtained as the image of the medical image after removing metal artifacts.
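A minimal sketch of this unrolled iteration is shown below; m_nets and x_nets stand for hypothetical lists of position extraction networks and artifact removal networks, and the call signatures are assumptions.

```python
def run_image_processing_model(ct_image, m_nets, x_nets, m_init, x_init):
    m, x = m_init, x_init
    for m_net, x_net in zip(m_nets, x_nets):   # one pass per image processing sub-model
        m = m_net(ct_image, x, m)              # next region position information
        x = x_net(ct_image, m)                 # next target image after artifact removal
    return x                                   # last target image = medical image without metal artifacts
```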
  • The computer device calls the artifact removal network and determines the adjusted first artifact information based on the adjusted multiple pieces of regional position information, the adjusted first region structure parameters and the adjusted second region structure parameters.
  • Based on the adjusted first artifact information, artifacts are removed from the medical image until the target number of target images is obtained, and the last target image obtained is determined to be the image of the medical image after removing metal artifacts.
  • the process of determining the adjusted first artifact information is the same as the process of step 702, and the process of removing artifacts from the medical image based on the adjusted first artifact information is the same as the process of step 703.
  • the artifact removal network includes an image reconstruction layer.
  • The computer device calls the image reconstruction layer to construct adjusted artifact information based on the adjusted multiple pieces of regional position information, the adjusted first region structure parameters and the adjusted second region structure parameters.
  • The difference information between the medical image and the adjusted artifact information is determined as the first reference image; the first reference image and the target image are then weighted to obtain the adjusted target image.
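The sketch below illustrates one plausible form of this image reconstruction layer; the blending weight eta is an assumed (possibly learnable) parameter rather than a value given in the specification.

```python
import torch.nn.functional as F

def reconstruct_target(ct_image, position_maps, region_kernels, prev_target, eta=0.5):
    # position_maps: (B, L, H, W); region_kernels: (1, L, p, p), p odd
    pad = region_kernels.shape[-1] // 2
    artifact = F.conv2d(position_maps, region_kernels, padding=pad)   # adjusted artifact information
    first_reference = ct_image - artifact                             # first reference image
    return (1.0 - eta) * prev_target + eta * first_reference          # weighted, adjusted target image
```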
  • the medical image is input to the image processing model.
  • the image processing model processes the medical image to obtain the target image.
  • The embodiment of the present application only takes an image processing model that includes multiple image processing sub-models as an example.
  • In the case where the image processing model includes one image processing sub-model, that is, one position extraction network and one artifact removal network, the target image output by that artifact removal network is used as the target image after removing metal artifacts.
  • In the method provided by the embodiments of the present application, when training the image processing model, the structural characteristic that a metal artifact consists of multiple rotationally symmetric regions is used as prior knowledge for metal artifact removal. Fully considering this structural characteristic improves the effect of the image processing model in removing metal artifacts when the model is called to remove metal artifacts from medical images.
  • Moreover, the embodiment of the present application performs multiple iterations, applies the result of each iteration to the next iteration to continuously optimize the determined target image, and determines the target image obtained in the last iteration as the image of the medical image after removing metal artifacts, which further ensures the artifact removal effect.
  • In the related art, using a deep learning network to remove metal artifacts requires obtaining the sinogram of the image that includes the metal artifacts and processing that sinogram.
  • The solution provided by the embodiments of this application is an image-domain processing method, which eliminates the need to collect sinograms of medical images and reduces the cost of data acquisition.
  • Figure 9 is a flow chart of yet another image processing method provided by an embodiment of the present application, including a training process and a testing process of the image processing model.
  • During training, the computer device preprocesses the sample medical images, uses the image processing model to remove metal artifacts from the preprocessed sample medical images, and iteratively trains the image processing model based on the removal results until the number of iterations reaches the target number, after which the trained image processing model is saved.
  • During testing, the computer device preprocesses the medical image and loads the trained image processing model; based on the image processing model, metal artifacts are removed from the preprocessed medical image and the target image after removing the metal artifacts is output.
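A rough training-loop sketch of these steps is given below; the data loader, the simple masked L2 loss and the optimizer settings are illustrative assumptions, not the loss actually specified in the embodiments.

```python
import torch

def train(model, loader, epochs, lr=1e-4):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):                                  # iterate until the target number of iterations
        for sample_ct, sample_target, nonmetal in loader:    # preprocessed sample medical images
            pred_targets, _ = model(sample_ct)               # assumed to return the per-stage target images
            loss = sum(((nonmetal * (p - sample_target)) ** 2).mean() for p in pred_targets)
            opt.zero_grad()
            loss.backward()
            opt.step()
    torch.save(model.state_dict(), "image_processing_model.pt")  # save the trained image processing model
```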
  • FIG. 10 is a schematic structural diagram of an image processing device provided by an embodiment of the present application. Referring to Figure 10, the device includes:
  • the model acquisition module 1001 is used to acquire an image processing model.
  • the image processing model includes structural parameters.
  • the structural parameters represent the structure of the metal artifact.
  • The structural parameters include a first region structure parameter of the first region and a second region structure parameter of the second region; the first region is any region in the metal artifact, and the second region is a region in the metal artifact other than the first region;
  • the model training module 1002 is used to adjust the first region structure parameters based on the training samples when training the image processing model to obtain adjusted first region structure parameters; and, based on the angle difference between the first region and the second region, to adjust the adjusted first region structure parameters and determine the obtained region structure parameters as the adjusted second region structure parameters;
  • the trained image processing model is used to remove metal artifacts in any medical image based on the adjusted structural parameters.
  • The device provided by the embodiment of the present application is based on the structural characteristic that the metal artifact contains multiple regions with rotational symmetry.
  • The first region structure parameters of the first region are first adjusted using the training samples to obtain the adjusted first region structure parameters; the adjusted first region structure parameters are then adjusted again, and the resulting region structure parameters are determined as the adjusted second region structure parameters of the second region.
  • The structural characteristics of the metal artifact are thereby used as prior knowledge when removing metal artifacts, fully considering that metal artifacts form multiple regions with rotational symmetry, which can improve the effect of the image processing model in removing metal artifacts.
  • In one possible implementation, the first region structure parameter is represented by the product of a weight coefficient and the first original region structure parameter of the first region.
  • The model training module 1002 is used to adjust, based on the training samples when training the image processing model, the weight coefficient and the first original region structure parameter to obtain an adjusted weight coefficient and an adjusted first original region structure parameter.
  • The adjusted first region structure parameter is represented by the product of the adjusted weight coefficient and the adjusted first original region structure parameter.
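A minimal sketch of this parameterization follows; the kernel size and the initializations are assumptions.

```python
import torch

p = 9                                                     # assumed kernel size
weight = torch.nn.Parameter(torch.ones(1))                # weight coefficient, adjusted during training
original_kernel = torch.nn.Parameter(torch.randn(p, p))   # first original region structure parameter
first_region_kernel = weight * original_kernel            # first region structure parameter = product of the two
```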
  • In another possible implementation, each region in the metal artifact includes at least one strip artifact, and the first original region structure parameter of the first region is a matrix, the matrix being used to represent the first region; the model training module 1002 is configured to:
  • adjust the elements in the matrix that are not less than a reference value to a target value, where the target value indicates that the target position where it is located does not correspond to any sub-region in the first region;
  • the non-target positions in the adjusted matrix correspond respectively to the multiple sub-regions in the first region, a non-target position being any position in the adjusted matrix other than a target position holding the target value;
  • the elements at the non-target positions indicate whether the corresponding sub-region in the first region contains a strip artifact, and, where it does, the strip artifact itself.
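The sketch below shows one way to interpret this masking step: matrix positions whose radial coordinate is not less than the reference value ((p+1)/2)·h are set to the target value 0, so that only non-target positions can describe sub-regions of the first region. The coordinate convention and the value of h are assumptions.

```python
import numpy as np

def mask_target_positions(kernel: np.ndarray, h: float = 0.25) -> np.ndarray:
    p = kernel.shape[0]
    reference = ((p + 1) / 2) * h                 # reference value derived from the kernel size and preset h
    center = (p + 1) / 2
    rows, cols = np.indices(kernel.shape) + 1     # 1-based element positions in the matrix
    radius = np.hypot((rows - center) * h, (cols - center) * h)
    out = kernel.copy()
    out[radius >= reference] = 0.0                # target value at target positions (no corresponding sub-region)
    return out
```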
  • The model training module 1002 is used to determine a rotation parameter based on the angle difference between the first region and the second region, to adjust the adjusted first region structure parameters based on the rotation parameter, and to determine the resulting region structure parameters as the adjusted second region structure parameters.
  • In another possible implementation, the first region structure parameter is a matrix; the elements at non-target positions in the matrix indicate whether the corresponding sub-region in the first region contains a strip artifact, and, where it does, the strip artifact itself, a non-target position being any position in the matrix other than a target position holding the target value. The model training module 1002 is used to adjust, based on the rotation parameter, the positions of the elements in the adjusted first region structure parameters, so that the elements at the non-target positions of the resulting region structure parameters indicate whether the corresponding sub-region in the second region contains a strip artifact, and, where it does, the strip artifact itself.
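A hedged sketch of deriving the second region structure parameter by rotating the adjusted first region structure parameter is shown below; scipy's image rotation is used as a stand-in for the rotation parameter described above.

```python
import numpy as np
from scipy.ndimage import rotate

def second_region_kernel(first_kernel: np.ndarray, angle_diff_deg: float) -> np.ndarray:
    # Move each element to the position it occupies after rotating by the angle difference,
    # so the strip-artifact pattern keeps its shape but takes the second region's orientation.
    return rotate(first_kernel, angle_diff_deg, reshape=False, order=1, mode="constant")
```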
  • The image processing model also includes position extraction parameters.
  • The model training module 1002 is also used to adjust the position extraction parameters based on the training samples when training the image processing model; the trained image processing model is used to remove metal artifacts in any medical image based on the adjusted structural parameters and the adjusted position extraction parameters.
  • the device also includes:
  • The image processing module 1003 is used to call the trained image processing model and, based on the position extraction parameters, perform position extraction on the medical image to obtain multiple pieces of regional position information, each piece of regional position information representing the position of one region of the metal artifact contained in the medical image;
  • to construct the first artifact information based on the multiple pieces of regional position information, the adjusted first region structure parameters and the adjusted second region structure parameters; and to perform artifact removal on the medical image based on the first artifact information to obtain the target image.
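The construction of the first artifact information can be sketched as a sum of per-region convolutions between the region structure kernels and the region position maps, matching the artifact being built from the structure parameters and position information; the shapes below are assumptions.

```python
import torch.nn.functional as F

def build_first_artifact(position_maps, region_kernels):
    # position_maps: (B, L, H, W) region position information; region_kernels: (1, L, p, p), p odd
    pad = region_kernels.shape[-1] // 2
    return F.conv2d(position_maps, region_kernels, padding=pad)   # (B, 1, H, W) first artifact information
```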
  • the image processing module 1003 includes:
  • a position gradient determination unit configured to determine the regional position gradient information of the multiple regions by comparing the medical image with the target image, where the regional position gradient information indicates the change amplitude of the regional position information
  • An artifact removal unit configured to determine the adjusted first artifact information based on the adjusted plurality of area position information, the adjusted first area structure parameter, and the adjusted second area structure parameter, based on the After adjusting the first artifact information, artifact removal is performed on the medical image until a target number of target images are obtained, and the last target image obtained is determined as the medical image after removing the metal artifact.
  • The position gradient determination unit is used to: determine the difference information between the medical image and the target image as second artifact information; and determine the regional position gradient information of the plurality of regions based on the first artifact information and the second artifact information.
  • In another possible implementation, the position gradient determination unit is used to: determine the difference information between the first artifact information and the second artifact information as artifact difference information; and determine the regional position gradient information of the plurality of regions based on the artifact difference information, the adjusted first region structure parameters and the adjusted second region structure parameters.
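Once the gradients are available, the adjustment of the region position information can be sketched as a gradient step followed by a small proximal network; prox_net and the step size eta are assumptions for illustration.

```python
def adjust_position_maps(position_maps, position_gradient, prox_net, eta=0.1):
    return prox_net(position_maps - eta * position_gradient)   # adjusted region position information
```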
  • In another possible implementation, the image processing model includes a position extraction network and an artifact removal network; referring to Figure 11, the device also includes an image processing module 1003 for calling the position extraction network to perform position extraction on the medical image to obtain the regional position information of the multiple regions in the metal artifact, and for calling the artifact removal network to determine the first artifact information based on the multiple pieces of regional position information, the adjusted first region structure parameters and the adjusted second region structure parameters, and to perform artifact removal on the medical image based on the first artifact information to obtain the target image.
  • It should be noted that, when the image processing device provided in the above embodiments processes images, the division into the above functional modules is only used as an example; in practical applications, the above functions can be allocated to different functional modules as needed, that is, the internal structure of the computer device is divided into different functional modules to complete all or part of the functions described above.
  • the image processing device provided by the above embodiments and the image processing method embodiments belong to the same concept. Please refer to the method embodiments for the specific implementation process, which will not be described again here.
  • Embodiments of the present application also provide a computer device.
  • The computer device includes a processor and a memory; at least one computer program is stored in the memory, and the at least one computer program is loaded and executed by the processor to implement the operations performed by the image processing method of the above embodiments.
  • FIG. 12 is a schematic structural diagram of a terminal 1200 provided by an embodiment of the present application.
  • The terminal 1200 can be a portable mobile terminal, such as a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop or a desktop computer.
  • the terminal 1200 may also be called a user equipment, a portable terminal, a laptop terminal, a desktop terminal, and other names.
  • the terminal 1200 includes: a processor 1201 and a memory 1202.
  • the processor 1201 may include one or more processing cores, such as a 4-core processor, an 8-core processor, etc.
  • the processor 1201 can adopt at least one hardware form among DSP (Digital Signal Processing, digital signal processing), FPGA (Field-Programmable Gate Array, field programmable gate array), and PLA (Programmable Logic Array, programmable logic array).
  • the processor 1201 can also include a main processor and a co-processor.
  • The main processor is a processor used to process data in the awake state, also called CPU (Central Processing Unit); the co-processor is a low-power processor used to process data in the standby state.
  • Memory 1202 may include one or more computer-readable storage media, which may be non-transitory. Memory 1202 may also include high-speed random access memory, and non-volatile memory, such as one or more disk storage devices, flash memory storage devices. In some embodiments, the non-transitory computer-readable storage medium in the memory 1202 is used to store at least one computer program, and the at least one computer program is used to be executed by the processor 1201 to implement the methods provided by the method embodiments in this application. Image processing methods.
  • the terminal 1200 optionally further includes: a peripheral device interface 1203 and at least one peripheral device.
  • the processor 1201, the memory 1202 and the peripheral device interface 1203 may be connected through a bus or a signal line.
  • Each peripheral device can be connected to the peripheral device interface 1203 through a bus, a signal line, or a circuit board.
  • the peripheral device includes: at least one of a radio frequency circuit 1204, a display screen 1205, a camera assembly 1206, an audio circuit 1207, and a power supply 1208.
  • the peripheral device interface 1203 may be used to connect at least one I/O (Input/Output) related peripheral device to the processor 1201 and the memory 1202 .
  • In some embodiments, the processor 1201, the memory 1202 and the peripheral device interface 1203 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1201, the memory 1202 and the peripheral device interface 1203 can be implemented on a separate chip or circuit board.
  • the radio frequency circuit 1204 is used to receive and transmit RF (Radio Frequency, radio frequency) signals, also called electromagnetic signals. Radio frequency circuit 1204 communicates with communication networks and other communication devices through electromagnetic signals. The radio frequency circuit 1204 converts electrical signals into electromagnetic signals for transmission, or converts received electromagnetic signals into electrical signals. Optionally, the radio frequency circuit 1204 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a user identity module card, and the like. Radio frequency circuitry 1204 can communicate with other terminals through at least one wireless communication protocol.
  • The wireless communication protocol includes but is not limited to: the World Wide Web, metropolitan area networks, intranets, mobile communication networks of all generations (2G, 3G, 4G and 5G), wireless LANs and/or WiFi (Wireless Fidelity) networks.
  • the radio frequency circuit 1204 may also include NFC (Near Field Communication) related circuits, which is not limited in this application.
  • the display screen 1205 is used to display UI (User Interface, user interface).
  • the UI includes graphics, text, icons, videos, and any combination thereof.
  • When the display screen 1205 is a touch display screen, the display screen 1205 has the ability to collect touch signals on or above its surface.
  • the touch signal may be input to the processor 1201 as a control signal for processing.
  • the display screen 1205 is also used to provide virtual buttons and/or virtual keyboard, also called soft buttons and/or soft keyboard.
  • In some embodiments, there may be one display screen 1205, provided on the front panel of the terminal 1200; in other embodiments, there may be at least two display screens 1205, respectively provided on different surfaces of the terminal 1200 or in a folding design; in still other embodiments, the display screen 1205 may be a flexible display screen disposed on a curved surface or a folding surface of the terminal 1200. The display screen 1205 can even be set in a non-rectangular irregular shape, that is, a special-shaped screen.
  • the display screen 1205 can be made of LCD (Liquid Crystal Display, liquid crystal display), OLED (Organic Light-Emitting Diode, organic light-emitting diode) and other materials.
  • the camera component 1206 is used to capture images or videos.
  • the camera assembly 1206 includes a front camera and a rear camera.
  • the front camera is set on the front panel of the terminal, and the rear camera is set on the back of the terminal.
  • In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera can be fused to realize the background blur function.
  • camera assembly 1206 also includes a flash.
  • the flash can be a single color temperature flash or a dual color temperature flash. Dual color temperature flash refers to a combination of warm light flash and cold light flash, which can be used for light compensation under different color temperatures.
  • Audio circuitry 1207 may include a microphone and speakers.
  • the microphone is used to collect sound waves from the user and the environment, convert the sound waves into electrical signals and input them to the processor 1201 for processing, or input them to the radio frequency circuit 1204 to implement voice communication.
  • the microphone can also be an array microphone or an omnidirectional collection microphone.
  • the speaker is used to convert electrical signals from the processor 1201 or the radio frequency circuit 1204 into sound waves.
  • the loudspeaker can be a traditional membrane loudspeaker or a piezoelectric ceramic loudspeaker.
  • audio circuitry 1207 may also include a headphone jack.
  • the power supply 1208 is used to power various components in the terminal 1200.
  • Power source 1208 may be AC, DC, disposable batteries, or rechargeable batteries.
  • the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. Wired rechargeable batteries are batteries that are charged through wired lines, and wireless rechargeable batteries are batteries that are charged through wireless coils.
  • the rechargeable battery can also be used to support fast charging technology.
  • Those skilled in the art can understand that the structure shown in FIG. 12 does not constitute a limitation on the terminal 1200, which may include more or fewer components than shown, combine certain components, or adopt a different component arrangement.
  • the computer device is provided as a server.
  • Figure 13 is a schematic structural diagram of a server provided by an embodiment of the present application.
  • The server 1300 may vary greatly due to different configurations or performance, and may include one or more processors (Central Processing Units, CPU) 1301 and one or more memories 1302, where at least one computer program is stored in the memory 1302, and the at least one computer program is loaded and executed by the processor 1301 to implement the methods provided by the above method embodiments.
  • the server can also have components such as wired or wireless network interfaces, keyboards, and input and output interfaces to facilitate input and output.
  • the server can also include other components for implementing device functions, which will not be described again here.
  • Embodiments of the present application also provide a computer-readable storage medium.
  • the computer-readable storage medium stores at least one computer program.
  • The at least one computer program is loaded and executed by a processor to implement the operations performed by the image processing method of the above embodiments.
  • An embodiment of the present application also provides a computer program product.
  • The computer program product includes a computer program.
  • When the computer program is executed by a processor, the operations performed by the image processing method of the above embodiments are implemented.
  • The computer program involved in the embodiments of the present application may be deployed and executed on one computer device, or executed on multiple computer devices located at one site, or executed on multiple computer devices distributed at multiple sites and interconnected through a communication network; multiple computer devices distributed at multiple sites and interconnected through a communication network can form a blockchain system.
  • the program can be stored in a computer-readable storage medium.
  • the storage medium can be read-only memory, magnetic disk or optical disk, etc.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

An image processing method and apparatus, a computer device and a storage medium, belonging to the field of computer technology. The method includes: obtaining an image processing model, the image processing model containing structural parameters, the structural parameters representing the structure of a metal artifact and including a first region structure parameter of a first region and a second region structure parameter of a second region (201); when training the image processing model, adjusting the first region structure parameter based on training samples to obtain an adjusted first region structure parameter; adjusting the adjusted first region structure parameter based on the angle difference between the first region and the second region, and determining the obtained region structure parameter as the adjusted second region structure parameter, the trained image processing model being used to remove metal artifacts from any medical image based on the adjusted structural parameters (202).

Description

图像处理方法、装置、计算机设备及存储介质
本申请要求于2022年04月19日提交、申请号为202210409315.6、发明名称为“图像处理方法、装置、计算机设备及存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请实施例涉及计算机技术领域,特别涉及一种图像处理方法、装置、计算机设备及存储介质。
背景技术
计算机断层扫描(Computed Tomography,CT)能够无损检测到人体内的组织器官结构,因此在医学领域具有广泛的应用。在采用计算机断层扫描采集CT图像时,受到人体内金属植入物的影响,采集到的CT图像中会出现金属伪影,影响CT图像的质量。相关技术中,利用图像处理模型去除CT图像中的金属伪影,但是目前的图像处理模型去除金属伪影的效果较差。
发明内容
本申请实施例提供了一种图像处理方法、装置、计算机设备及存储介质,提高了去除金属伪影的效果。所述技术方案如下:
一方面,提供了一种图像处理方法,所述方法包括:
计算机设备获取图像处理模型,所述图像处理模型包含结构参数,所述结构参数表示金属伪影的结构,所述结构参数包括第一区域的第一区域结构参数和第二区域的第二区域结构参数,所述第一区域为所述金属伪影中的任一区域,所述第二区域是所述金属伪影中除所述第一区域之外的区域;
所述计算机设备在训练所述图像处理模型时,基于训练样本调整所述第一区域结构参数,得到调整后的第一区域结构参数;基于所述第一区域和所述第二区域之间的角度差,调整所述调整后的第一区域结构参数,将得到的区域结构参数确定为调整后的第二区域结构参数;
其中,训练后的所述图像处理模型用于基于调整后的结构参数去除任一医学图像中的金属伪影。
另一方面,提供了一种图像处理装置,所述装置包括:
模型获取模块,用于获取图像处理模型,所述图像处理模型包含结构参数,所述结构参数表示金属伪影的结构,所述结构参数包括第一区域的第一区域结构参数和第二区域的第二区域结构参数,所述第一区域为所述金属伪影中的任一区域,所述第二区域是所述金属伪影中除所述第一区域之外的区域;
模型训练模块,用于在训练所述图像处理模型时,基于训练样本调整所述第一区域结构参数,得到调整后的第一区域结构参数;基于所述第一区域和所述第二区域之间的角度差,调整所述调整后的第一区域结构参数,将得到的区域结构参数确定为调整后的第二区域结构参数;
其中,训练后的所述图像处理模型用于基于调整后的结构参数去除任一医学图像中的金属伪影。
在一种可能实现方式中,所述第一区域结构参数由权重系数和所述第一区域的第一原始区域结构参数的乘积表示;所述模型训练模块,用于在训练所述图像处理模型时,基于所述 训练样本调整所述权重系数和所述第一原始区域结构参数,得到调整后的权重系数和调整后的第一原始区域结构参数,所述调整后的第一区域结构参数由所述调整后的权重系数和所述调整后的第一原始区域结构参数的乘积表示。
在另一种可能实现方式中,所述金属伪影中的每个所述区域包括至少一个条形伪影,所述第一区域的第一原始区域结构参数为矩阵,所述矩阵用于表示第一区域;
所述模型训练模块,用于:
将所述矩阵中不小于参考数值的元素调整为目标数值,所述目标数值表示所在的目标位置不对应第一区域中的任一子区域,调整后的矩阵中的非目标位置与所述第一区域中的多个子区域分别对应,所述非目标位置是指所述调整后的矩阵中除所述目标数值所在的目标位置之外的其他位置,所述非目标位置上的元素表示所述第一区域中对应的子区域中是否包含条形伪影,以及在所述子区域中包含所述条形伪影的情况下的所述条形伪影。
在另一种可能实现方式中,所述模型训练模块,用于:
基于所述第一区域和所述第二区域之间的角度差,确定旋转参数;
基于所述旋转参数,调整所述调整后的第一区域结构参数,将得到的区域结构参数确定为所述调整后的第二区域结构参数。
在另一种可能实现方式中,所述第一区域结构参数为矩阵,所述矩阵中的非目标位置上的元素表示所述第一区域中对应的子区域中是否包含条形伪影,以及在所述子区域中包含所述条形伪影的情况下的所述条形伪影,所述非目标位置是指所述矩阵中除目标数值所在的目标位置之外的其他位置;所述模型训练模块,用于:
基于所述旋转参数,调整所述调整后的第一区域结构参数中元素的位置,以使得到的第二区域结构参数中所述非目标位置的元素表示所述第二区域中对应的子区域中是否包含条形伪影,以及在所述子区域中包含该条形伪影的情况下的所述条形伪影。
在另一种可能实现方式中,所述图像处理模型还包含位置提取参数,所述模型训练模块,还用于在训练所述图像处理模型时,基于所述训练样本调整所述位置提取参数;
其中,训练后的所述图像处理模型用于基于所述调整后的结构参数和调整后的位置提取参数去除任一医学图像中的金属伪影。
在另一种可能实现方式中,所述装置还包括:
图像处理模块,用于调用训练后的所述图像处理模型,基于所述位置提取参数,对所述医学图像进行位置提取,得到多个区域位置信息,每个所述区域位置信息表示所述医学图像包含的所述金属伪影中的每个区域的位置;基于多个区域位置信息、所述调整后的第一区域结构参数和所述调整后的第二区域结构参数,构建第一伪影信息;基于所述第一伪影信息,对所述医学图像进行伪影去除,得到目标图像。
在另一种可能实现方式中,所述图像处理模块,包括:
位置梯度确定单元,用于通过将所述医学图像与所述目标图像进行对比,分别确定所述多个区域的区域位置梯度信息,所述区域位置梯度信息指示区域位置信息的变化幅度;
位置信息确定单元,用于基于多个区域位置梯度信息分别对所述多个区域位置信息进行调整,得到调整后的多个区域位置信息;
伪影去除单元,用于基于所述调整后的多个区域位置信息、所述调整后的第一区域结构参数和所述调整后的第二区域结构参数,确定调整后的第一伪影信息,基于所述调整后的第一伪影信息,对所述医学图像进行伪影去除,直至得到目标数量个目标图像,将得到的最后一个目标图像确定为所述医学图像去除所述金属伪影后的图像。
在另一种可能实现方式中,所述位置梯度确定单元,用于:
将所述医学图像和所述目标图像之间的差异信息,确定为第二伪影信息;
基于所述第一伪影信息和所述第二伪影信息,分别确定所述多个区域的区域位置梯度信息。
在另一种可能实现方式中,所述位置梯度确定单元,用于:
将所述第一伪影信息和所述第二伪影信息之间的差异信息,确定为伪影差异信息;
基于所述伪影差异信息、所述调整后的第一区域结构参数和所述调整后的第二区域结构参数,分别确定所述多个区域的区域位置梯度信息。
在另一种可能实现方式中,所述图像处理模型包括位置提取网络和伪影去除网络;所述装置还包括:
图像处理模块,用于调用所述位置提取网络,对所述医学图像进行位置提取,得到所述金属伪影中多个区域的区域位置信息;调用所述伪影去除网络,基于多个区域位置信息、所述调整后的第一区域结构参数和所述调整后的第二区域结构参数,确定所述第一伪影信息,基于所述第一伪影信息,对所述医学图像进行伪影去除,得到目标图像。
另一方面,提供了一种计算机设备,所述计算机设备包括处理器和存储器,所述存储器中存储有至少一条计算机程序,所述至少一条计算机程序由所述处理器加载并执行,以实现如上述方面所述的图像处理方法所执行的操作。
另一方面,提供了一种计算机可读存储介质,所述计算机可读存储介质中存储有至少一条计算机程序,所述至少一条计算机程序由处理器加载并执行,以实现如上述方面所述的图像处理方法所执行的操作。
另一方面,提供了一种计算机程序产品,包括计算机程序,所述计算机程序被处理器执行时实现上述方面所述的图像处理方法所执行的操作。
本申请实施例提供的技术方案,基于金属伪影包含旋转对称的多个区域这一结构特性,在调整图像处理模型中金属伪影的结构参数时,首先利用训练样本调整第一区域的第一区域结构参数,得到调整后的第一区域结构参数,然后继续对该第一区域结构参数进行调整,将调整得到的区域结构参数确定为第二区域的调整后的第二区域结构参数,将金属伪影在结构上的特性作为金属伪影去除时的先验知识,充分考虑了金属伪影是旋转对称的多个区域这一结构特性,能够提高图像处理模型去除金属伪影的效果,同时由于只需基于训练样本调整第一区域结构参数,无需直接对第二区域结构参数本身进行调整,仅需利用对第一区域结构参数的调整结果,即可得到调整后的第二区域结构参数,能够提高模型的训练效率。
附图说明
图1是本申请实施例提供的一种实施环境的示意图;
图2是本申请实施例提供的一种图像处理方法的流程图;
图3是本申请实施例提供的另一种图像处理方法的流程图;
图4是本申请实施例提供的一种医学图像的示意图;
图5是本申请实施例提供的一种模型结构的示意图;
图6是本申请实施例提供的另一种模型结构的示意图;
图7是本申请实施例提供的又一种图像处理方法的流程图;
图8是本申请实施例提供的一种图像处理过程的示意图;
图9是本申请实施例提供的再一种图像处理方法的流程图;
图10是本申请实施例提供的一种图像处理装置的结构示意图;
图11是本申请实施例提供的另一种图像处理装置的结构示意图;
图12是本申请实施例提供的一种终端的结构示意图;
图13是本申请实施例提供的一种服务器的结构示意图。
具体实施方式
为使本申请实施例的目的、技术方案和优点更加清楚,下面将结合附图对本申请实施方式作进一步地详细描述。
可以理解,本申请所使用的术语“第一”、“第二”等可在本文中用于描述各种概念,但除非 特别说明,这些概念不受这些术语限制。这些术语仅用于将一个概念与另一个概念区分。举例来说,在不脱离本申请的范围的情况下,可以将第一排列顺序称为第二排列顺序,将第二排列顺序称为第一排列顺序。
本申请所使用的术语“至少一个”、“多个”、“每个”、“任一”等,至少一个包括一个、两个或两个以上,多个包括两个或两个以上,每个是指对应的多个中的每一个,任一是指多个中的任意一个。举例来说,多个角度包括3个角度,而每个角度是指这3个角度中的每一个角度,任一是指这3个角度中的任意一个,可以是第一个,可以是第二个,也可以是第三个。
人工智能(Artificial Intelligence,AI)是利用数字计算机或者数字计算机控制的机器模拟、延伸和扩展人的智能,感知环境、获取知识并使用知识获得最佳结果的理论、方法、技术及应用系统。换句话说,人工智能是计算机科学的一个综合技术,它企图了解智能的实质,并生产出一种新的能以人类智能相似的方式做出反应的智能机器。人工智能也就是研究各种智能机器的设计原理与实现方法,使机器具有感知、推理与决策的功能。
人工智能技术是一门综合学科,涉及领域广泛,既有硬件层面的技术也有软件层面的技术。人工智能基础技术一般包括如传感器、专用人工智能芯片、云计算、分布式存储、大数据处理技术、操作/交互系统、机电一体化等技术。人工智能软件技术包括计算机视觉技术、语音处理技术、自然语言处理技术以及机器学习/深度学习、自动驾驶、智慧交通等几大方向。
计算机视觉技术(Computer Vision,CV)是一门研究如何使机器“看”的科学,更进一步的说,就是指用摄影机和电脑代替人眼对目标进行识别和测量等机器视觉,并进一步做图形处理,使电脑处理成为更适合人眼观察或传送给仪器检测的图像。作为一个科学学科,计算机视觉研究相关的理论和技术,试图建立能够从图像或者多维数据中获取信息的人工智能系统。计算机视觉技术通常包括图像处理、图像识别、图像语义理解、图像检索、OCR(Optical Character Recognition,光学字符识别)、视频处理、视频语义理解、视频内容/行为识别、三维物体重建、3D(3 Dimensions,三维)技术、虚拟现实、增强现实、同步定位与地图构建、自动驾驶、智慧交通等技术,还包括常见的人脸识别、指纹识别等生物特征识别技术。
机器学习(Machine Learning,ML)是一门多领域交叉学科,涉及概率论、统计学、逼近论、凸分析、算法复杂度理论等多门学科。研究计算机怎样模拟或实现人类的学习行为,以获取新的知识或技能,重新组织已有的知识结构使之不断改善自身性能。机器学习是人工智能的核心,是使计算机具有智能的根本途径,其应用遍及人工智能的各个领域。机器学习和深度学习包括人工神经网络、置信网络、强化学习、迁移学习、归纳学习、示教学习等技术。
随着人工智能技术研究和进步,人工智能技术在多个领域展开研究和应用,例如常见的智能家居、智能穿戴设备、虚拟助理、智能音箱、智能营销、无人驾驶、自动驾驶、无人机、机器人、智能医疗、智能客服、车联网、自动驾驶、智慧交通等,相信随着技术的发展,人工智能技术将在更多的领域得到应用,并发挥越来越重要的价值。
本申请实施例提供的图像处理方法,利用人工智能中的计算机视觉技术以及机器学习等技术,能够对包括金属伪影的医学图像进行伪影处理,得到去除金属伪影之后的图像。
本申请实施例提供的图像处理方法,能够用于计算机设备中。可选地,该计算机设备为终端或服务器。可选地,该服务器是独立的物理服务器,或者,是多个物理服务器构成的服务器集群或者分布式系统,或者,是提供云服务、云数据库、云计算、云函数、云存储、网络服务、云通信、中间件服务、域名服务、安全服务、CDN(Content Delivery Network,内容分发网络)、以及大数据和人工智能平台等基础云计算服务的云服务器。可选地,该终端是智能手机、平板电脑、笔记本电脑、台式计算机、智能音箱、智能手表等,但并不局限于此。
在一种可能实现方式中,本申请实施例所涉及的计算机程序可被部署在一个计算机设备上执行,或者在位于一个地点的多个计算机设备上执行,又或者,在分布在多个地点且通过通信网络互连的多个计算机设备上执行,分布在多个地点且通过通信网络互连的多个计算机 设备能够组成区块链系统。
在一种可能实现方式中,本申请实施例中用于训练图像处理模型的计算机设备是区块链系统中的节点,该节点能够将训练的图像处理模型存储在区块链中,之后该节点或者该区块链中的其他设备对应的节点可基于该图像处理模型,去除图像中的金属伪影。
图1是本申请实施例提供的一种实施环境的示意图。该实施环境包括终端101和服务器102。终端101和服务器102之间通过无线或有线网络连接。终端101上安装由服务器102提供服务的目标应用,终端101能够通过目标应用实现例如数据传输、图像处理等功能。例如,目标应用为图像处理应用,该图像处理应用能够去除CT图像中的金属伪影。
其中,服务器102训练图像处理模型,该图像处理模型用于去除图像中的金属伪影,服务器102将训练好的图像处理模型发送给终端101,终端101存储接收到的图像处理模型,后续能够基于该图像处理模型,对包括金属伪影的任一医学图像进行处理,得到去除金属伪影后的图像。
本申请实施例提供的图像处理方法,能够应用于多种场景。例如,在医学领域中,对患者进行扫描能够得到患者的CT图像,医生根据患者的CT图像以及患者的其他相关信息,能够确定患者的状态。但是如果在对患者进行扫描时患者的身体有金属植入物,则CT图像中会出现金属伪影,这些金属伪影不仅会降低CT图像的质量,而且还会对医生的诊断过程产生不利影响。因此,能够采用本申请实施例提供的图像处理方法,去除CT图像中的金属伪影,提高CT图像的质量,从而在医生进行临床诊断过程中提供准确的辅助性信息。
图2是本申请实施例提供的一种图像处理方法的流程图。本申请实施例的执行主体为计算机设备。参见图2,该方法包括以下步骤:
201、计算机设备获取图像处理模型,该图像处理模型包含结构参数,该结构参数表示金属伪影的结构,该结构参数包括第一区域的第一区域结构参数和第二区域的第二区域结构参数,该第一区域为该金属伪影中的任一区域,该第二区域是该金属伪影中除该第一区域之外的区域。
其中,图像处理模型用于去除医学图像中的金属伪影,金属伪影是指生成医学图像的过程中金属所造成的噪声信息。该医学图像中包括金属伪影以及导致出现金属伪影的金属。例如,医学图像是通过计算机断层扫描目标对象所得到的CT图像,CT图像中的金属伪影是由于目标对象体内或体表的金属对X射线的吸收和反射等,导致在金属周围以及整个CT图像中所产生的噪声。
本申请实施例中,金属伪影为旋转对称的条状结构,该金属伪影包含旋转对称的多个区域,每个区域中包含至少一个条状,该第一区域与第二区域是旋转对称。金属伪影的结构是金属伪影本身的特性,对于不同的医学图像来说,不同的医学图像中的金属伪影的结构是相同的。因此,对于图像处理模型来说,在图像处理模型中设置结构参数,通过对该结构参数进行训练,从而使该结构参数能够准确表示金属伪影的结构。其中,第一区域结构参数表示金属伪影中的第一区域的结构,第二区域结构参数表示金属伪影中的第二区域的结构。
202、计算机设备在训练该图像处理模型时,基于训练样本调整第一区域结构参数,得到调整后的第一区域结构参数,基于第一区域的角度和第二区域的角度,调整该调整后的第一区域结构参数,将得到的区域结构参数确定为调整后的第二区域结构参数,训练后的图像处理模型用于基于调整后的结构参数去除任一医学图像中的金属伪影。
其中,训练样本包括样本医学图像和样本目标图像,该样本医学图像是包含金属伪影的图像,样本目标图像是该样本医学图像去除金属伪影后的图像。
图像处理模型的结构参数包括第一区域的第一区域结构参数和第二区域的第二区域结构参数,计算机设备首先利用训练样本调整第一区域结构参数,得到调整后的第一区域结构参数。另外,由于金属伪影中的多个区域是旋转对称的,即该多个区域的形状是相同的,第一 区域和第二区域之间存在角度差,也即是将第一区域旋转该角度差,即可得到第二区域。因此,在调整了第一区域的第一区域结构参数的情况下,能够利用该第一区域与第二区域是旋转对称的这一结构特性,基于第一区域和第二区域之间的角度差,调整该调整后的第一区域结构参数,则得到的区域结构参数即可作为第二区域的调整后的第二区域结构参数。从而使计算机设备在训练图像处理模型的过程中,能够先调整第一区域结构参数,再利用调整后的第一区域结构参数,确定调整后的第二区域结构参数,从而提高了训练效率。
其中,训练后的图像处理模型包括调整后的结构参数,调整后的结构参数包括调整后的第一区域结构参数和调整后的第二区域结构参数,这里所说的调整后的第一区域结构参数是指基于训练样本对第一区域结构参数进行调整所得到的区域结构参数,这里所说的调整后的第二区域结构参数是指对调整好后的第一区域结构参数再次进行调整所得到的区域结构参数。
本申请实施例提供的方法,基于金属伪影包含旋转对称的多个区域这一结构特性,在调整图像处理模型中金属伪影的结构参数时,首先利用训练样本调整第一区域的第一区域结构参数,得到调整后的第一区域结构参数,然后继续对该第一区域结构参数进行调整,将调整得到的区域结构参数确定为第二区域的调整后的第二区域结构参数,将金属伪影在结构上的特性作为金属伪影去除时的先验知识,充分考虑了金属伪影是旋转对称的多个区域这一结构特性,能够提高图像处理模型去除金属伪影的效果,同时由于只需基于训练样本调整第一区域结构参数,无需直接对第二区域结构参数本身进行调整,仅需利用对第一区域结构参数的调整结果,即可得到调整后的第二区域结构参数,能够提高模型的训练效率。
图3是本申请实施例提供的另一种图像处理方法的流程图。本申请实施例的执行主体为计算机设备,参见图3,该方法包括以下步骤。
301、计算机设备获取图像处理模型,该图像处理模型包含结构参数和位置提取参数。
其中,图像处理模型用于去除医学图像中的金属伪影,该图像处理模型是未训练的模型,或者是已经过一次或者多次训练的模型。金属伪影是指生成医学图像的过程中金属所造成的噪声信息,结构参数表示金属伪影的结构。金属伪影的结构属于该金属伪影本身的特性,对于不同的医学图像来说,不同的医学图像中的金属伪影的结构是相同的,因此本申请实施例中,在图像处理模型中设置结构参数,通过对图像处理模型进行训练,得到结构参数。
并且,由于金属伪影为旋转对称的条状结构,因此能够将该金属伪影划分为旋转对称的多个区域,每个区域包含至少一个条状,对于这多个区域来说,任两个区域的形状是相似的,不同的是在金属伪影中的角度。因此,对于图像处理模型来说,在图像处理模型中设置结构参数时,设置第一区域的第一区域结构参数,而第二区域的第二区域结构参数能够根据第一区域和第二区域之间的角度差,通过调整第一区域结构参数获得。其中,第一区域结构参数表示金属伪影中的第一区域的结构,第二区域结构参数表示金属伪影中的第二区域的结构。
在一种可能实现方式中,将金属伪影划分为旋转对称的多个区域,并确定该金属伪影中的参考条形,该参考线条是指金属伪影中的任一条形,在多个区域中分别确定目标条形,且每个区域中的目标条形的位置是对应的,例如,第一区域中的目标条形是第一区域中最右侧的条形,同样的,第二区域中的目标条形也是第二区域中最右侧的条形。然后分别将每个目标条形与参考条形之间的夹角,确定为每个区域的角度。将第一区域的角度与第二区域的角度之间的差值,确定为第一区域与第二区域之间的角度差。
在另一种可能实现方式中,基于金属伪影中划分的区域的总数目,采用下述公式,确定每个区域的角度,然后将第一区域的角度与第二区域的角度之间的差值,确定为第一区域与第二区域之间的角度差。
θl=2π(l-1)/L
其中,θl表示金属伪影中第l个区域的角度,L为金属伪影中划分的区域的总数目。例如,L为8。
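下面给出一个按上式计算各区域角度的最小示例（假设L为8，仅作说明）：

```python
import math

L = 8                                                          # 金属伪影中划分的区域总数目（假设值）
angles = [2 * math.pi * (l - 1) / L for l in range(1, L + 1)]  # 每个区域的角度θl
angle_diff = angles[1] - angles[0]                             # 第一区域与第二区域之间的角度差示例
```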
当然,计算机设备还能够采用其他方式确定每个区域的角度,本申请实施例不做限制。
其中,位置提取参数用于提取医学图像中金属伪影的位置信息,不同的医学图像中,金属位置所在的位置可能存在区别,因此在图像处理模型中设置位置提取参数,从医学图像中提取金属位置的位置信息。
在一种可能实现方式中,图像处理模型包括位置提取网络和伪影去除网络,该位置提取网络的网络参数包括该位置提取参数,该位置提取网络用于利用该位置提取参数提取医学图像中金属伪影的位置信息。该伪影去除网络的网络参数包括该结构参数,该伪影去除网络用于基于位置信息和结构参数,去除医学图像中的金属伪影。
在另一种可能实现方式中,图像处理模型包括多个图像处理子模型,每个图像处理子模型包括位置提取网络和伪影去除网络。
302、计算机设备获取训练样本。
其中,训练样本包括样本医学图像和样本目标图像,该样本医学图像是包含金属伪影的图像,样本目标图像是该样本医学图像去除金属伪影后的图像。
在一种可能实现方式中,计算机设备直接获取样本目标图像,该样本目标图像是不包括样本金属伪影的图像。计算机设备获取伪影信息,该伪影信息包括金属的位置信息以及金属的结构信息。计算机设备采用数据仿真方法,根据样本目标图像和伪影信息以及CT设备的成像参数,合成包括金属伪影的样本医学图像,然后计算机设备将该样本医学图像以及样本目标图像,确定为训练样本。
在另一种可能实现方式中,计算机设备获取样本医学图像,然后采用除本申请中的图像处理模型之外的其他方式,对该样本医学图像进行伪影去除,得到样本目标图像。
可选地,对于CT图像来说,计算机设备将训练样本中的图像的像素值进行调整,将每个像素点的像素值控制在[0,1]范围内,然后将每个像素点的像素值至转换到[0,255]的范围内。
可选地,计算机设备将训练样本的图像裁剪至目标尺寸,然后随机对每个图像进行水平镜像翻转或者垂直镜像翻转,从而提高训练样本中的图像的多样性。
303、计算机设备在训练图像处理模型时,基于训练样本调整第一区域结构参数和位置提取参数,得到调整后的第一区域结构参数和调整后的位置提取参数,基于第一区域和第二区域之间的角度差,调整该调整后的第一区域结构参数,将得到的区域结构参数确定为调整后的第二区域结构参数。
计算机设备调用图像处理模型,对样本医学图像进行处理,得到预测目标图像,基于预测目标图像和样本目标图像,调整第一区域结构参数和位置提取参数。
本申请实施例中,由于第一区域和第二区域是旋转对称的,因此只需基于训练样本调整第一区域的第一区域结构参数,再基于第一区域和第二区域之间的角度差,继续调整该调整后的第一区域结构参数,即可得到第二区域的调整后的第二区域结构参数。
在一种可能实现方式中,该结构参数包括一个权重系数和多个原始区域结构参数,该权重系数和一个该原始区域结构参数的乘积代表一个区域的区域结构参数,即对于第一区域结构参数来说,该第一区域结构参数由权重系数和第一区域的第一原始区域结构参数的乘积表示。那么,在训练该图像处理模型时,基于该训练样本调整该权重系数和该第一区域的第一原始区域结构参数,得到调整后的权重系数和调整后的第一原始区域结构参数,然后基于调整后的权重系数和调整后的第一原始区域结构参数,确定调整后的第一区域结构参数,调整后的第一区域结构参数由该调整后的权重系数和调整后的第一原始区域结构参数的乘积表示。对于第二区域结构参数来说,基于调整后的权重系数、第一区域和第二区域之间的角度差、调整后的第一原始区域结构参数,确定调整后的第二区域结构参数。例如,第二区域结构参数由权重系数和第二区域的第二原始区域结构参数的乘积表示。计算机设备基于第一区域和第二区域之间的角度差,对调整后的权重系数和调整后的第一原始区域结构参数再次进行调整,得到再次调整后的权重系数和再次调整后的第一原始区域结构参数,将再次调整后的第一原始区域结构参数确定为调整后的第二原始区域结构参数,调整后的第二区域结构参数由再次调整后的权重系数和调整后的第二原始区域结构参数的乘积表示。
在一种可能实现方式中,该金属伪影中的每个区域包括至少一个条形伪影,条形伪影是指形状为条形的伪影。该第一区域的第一原始区域结构参数为矩阵,该矩阵用于表示第一区域;在得到该矩阵之后,将该矩阵中不小于参考数值的元素调整为目标数值,该目标数值表示所在的目标位置不对应第一区域中的任一子区域,调整后的矩阵中的非目标位置与该第一区域中的多个子区域分别对应,非目标位置是指调整后的矩阵中除目标数值所在的目标位置之外的其他位置,其中,该矩阵中的非目标位置上的元素表示该第一区域中对应的子区域中是否包含条形伪影,以及在该子区域中包含该条形伪影的情况下的该条形伪影。其中,参考数值是基于图像处理模型中表示区域结构参数的卷积核的尺寸和预设数值确定的,例如,卷积核的尺寸为p*p,预设数值为h,则该参考数值为((p+1)/2)h,其中,h为大于0的任一数值,p为奇数。目标数值为预先设置的数值,例如,目标数值为0或其他数值。
在一种可能实现方式中,计算机设备基于第一区域和第二区域之间的角度差,确定旋转参数,该旋转参数用于调整第一区域的第一区域结构参数。例如,该旋转参数为其中,θl表示第二区域与第一区域之间的角度差,在第一区域的角度为0的情况下,该角度差即为第二区域的角度。然后基于该旋转参数,调整该调整后的第一区域结构参数,将得到的区域结构参数确定为调整后的第二区域结构参数。
在一种可能实现方式中,第一区域结构参数为矩阵,矩阵中的非目标位置上的元素表示第一区域中对应的子区域中是否包含条形伪影,以及在子区域中包含该条形伪影的情况下该条形伪影的形状,非目标位置所示矩阵中除目标数值所在的目标位置之外的其他位置。计算机设备基于该旋转参数,调整该调整后的第一区域结构参数中元素的位置,以使得到的区域结构参数中非目标位置的元素表示第二区域中对应的子区域中是否包含条形伪影,以及在子区域中包含该条形伪影的情况下的该条形伪影。即通过调整第一区域结构参数中各个元素的位置,以使调整后得到的区域结构参数所表示的区域的形状不变,但是角度变为第二区域的角度,从而得到第二区域的第二区域结构参数。
在一种可能实现方式中,预测目标图像与样本目标图像之间的误差信息越小,该图像处理模型越准确。计算机设备确定预测目标图像与样本目标图像之间的误差信息,根据确定的误差信息,训练该图像处理模型,以使该误差信息越来越小,图像处理模型越来越准确。
在一种可能实现方式中,计算机设备将样本医学图像和样本目标图像之间的差异信息确定为样本伪影信息,图像处理模型还会输出预测伪影信息,预测伪影信息是图像处理模型输出的伪影信息,样本伪影信息是样本医学图像的真实伪影信息,则预测伪影信息与样本伪影信息之间的误差信息越小,图像处理模型也越准确。因此计算机设备分别确定预测目标图像与样本目标图像之间的误差信息,以及预测伪影信息与样本伪影信息之间的误差信息,根据确定的误差信息,训练该图像处理模型,以使误差信息越来越小,图像处理模型越来越准确。
例如,计算机设备采用以下公式,确定误差信息:
其中,L表示误差信息。μn、λ1和λ2为折衷参数,是用来平衡各项误差信息的权重。X表示样本目标图像,Y表示样本医学图像,I表示样本医学图像的非金属图像。X(n)表示第n个预测目标图像,A(n)表示第n个预测伪影信息,N表示图像处理模型中的总迭代次数,n表示第n次迭代过程,表示2范数运算,||·||1表示1范数运算。
304、计算机设备调用训练后的图像处理模型,对医学图像进行处理,得到去除金属伪影后的目标图像。
调用图像处理模型去除医学图像中的金属伪影的过程参见图7的实施例,在此不再赘述。
本申请实施例提供的方法,基于金属伪影包含旋转对称的多个区域这一结构特性,在调整图像处理模型中金属伪影的结构参数时,首先利用训练样本调整第一区域的第一区域结构参数,得到调整后的第一区域结构参数,然后继续对该第一区域结构参数进行调整,将调整得到的区域结构参数确定为第二区域的调整后的第二区域结构参数,将金属伪影在结构上的特性作为金属伪影去除时的先验知识,充分考虑了金属伪影是旋转对称的多个区域这一结构特性,能够提高图像处理模型去除金属伪影的效果,同时由于只需基于训练样本调整第一区域结构参数,无需直接对第二区域结构参数本身进行调整,仅需利用对第一区域结构参数的调整结果,即可得到调整后的第二区域结构参数,能够提高模型的训练效率。
对于上述实施例中的图像处理模型来说,在一种可能实现方式中,该图像处理模型的创建过程为:
(一)模型原理:
包含金属伪影的医学图像可用下述公式一来表示:
公式一:I⊙Y=I⊙X+I⊙A
其中,为医学图像,为去除金属伪影后的目标图像,为非金属图像,用于表示医学图像中的非金属区域,H和W分别为图像的高度和宽度,非金属图像中的像素值为0或者1,0表示金属区域,1表示非金属区域;A为伪影信息,表示医学图像中的金属伪影,⊙表示逐点乘法运算。
例如,上述公式一能够表示为图4所示的图像,医学图像401由医学图像402和金属伪影403确定。
其中,金属伪影的伪影信息可表示为:
其中,表示金属伪影的结构参数,该结构参数采用卷积核表示,p×p是卷积核的尺寸,表示金属伪影的位置信息,L表示金属伪影中的多个区域的总数量,K表示每个区域的卷积核的总数量,k表示每个区域的第k个卷积核,θl表示金属伪影中第l个区域的角度,θl=2π(l-1)/L,表示二维平面卷积运算。
其中,对于结构参数来说,参见上述图4所示的卷积核C,可以看出,采用卷积核表示金属伪影中某个区域的结构时,能够通过对该卷积核进行旋转,得到表示该金属伪影中另一区域的结构的卷积核,本申请实施例基于金属伪影的这种特性,能够将结构参数表示为:
其中,aqtk和bqtk表示待训练的调整参数,表示第l个区域的旋转参数,该旋转参数可表示为:
xij表示第l个区域的第k个卷积核中第i行第j列的元素:
xij=[xi,xj]T=[(i-(p+1)/2)h,(j-(p+1)/2)h]T
其中,p表示卷积核的尺寸,h为预设参数,例如h为1/4或其他数值,xi表示第i行, xj表示第j列。
均为旋转的傅里叶基函数,分别表示为:

其中,Ω(x)表示径向mask函数,且Ω(x)≥0,在||x||≥((p+1)/2)h的情况下,Ω(x)=0,该((p+1)/2)h为参考数值。在q≤p/2的情况下,否则,同理,在t≤p/2的情况下,否则,
将公式二代入公式一,可以得到以下公式来表示医学图像:
其中,分别由Ckl)和Mlk堆叠而成。Y是医学图像,I是非金属图像,医学图像和非金属图像均为已知的,去除医学图像中的金属伪影的过程,也即是确定公式五中的位置信息M和结构参数C的过程,得到位置信息M和结构参数C,即可确定出目标图像X。
由于结构参数C为金属伪影本身的特性,不与医学图像有关,因此可以假定结构参数C为已知的,则仅需确定位置信息M和目标图像X。其中,确定位置信息M和目标图像X的方式可通过优化以下公式来实现:
其中,α和β为折衷参数,f1(·)和f2(·)为正则函数,该正则函数f1(·)表示位置特征,该位置特征表示金属伪影的位置信息满足的特征,属于金属伪影的位置信息的先验知识,该正则函数f2(·)表示图像特征,该图像特征表示不包括金属伪影的图像满足的特征,属于不包括金属伪影的图像的先验知识。能够使上述公式六为最小值的位置信息M和目标图像X。
(二)模型求解:本申请实施例中,采用近端梯度技术交替更新位置信息M和目标图像X的方式,来优化公式五。其中,在第n次迭代中,确定位置信息的方式可通过优化以下公式来实现:

其中,M(n)表示第n次迭代中获取到的位置信息,X(n)表示第n次迭代中获取到的目标图像,分别表示f1(·)和f2(·)的近端算子,M(n-1)表示第n-1次迭代中获取到的位置信息,X(n-1)表示第n-1次迭代中获取到的目标图像,η1和η2为更新步长,分别表示位置梯度信息和图像梯度信息。
其中,分别表示为:

(三)模型创建:为了使用图像处理模型来确定位置信息和目标图像,可以根据上述公式五,构建图像处理模型,由于需要进行多次迭代来确定位置信息和目标图像,则该图像处理模型包括多个位置提取网络和多个伪影去除网络。其中,位置提取网络M-net和伪影去除网络X-net分别表示为:

其中,均为残差网络,分别表征公式六中的近端算子M(n)表示第n次迭代中获取到的位置信息,X(n)表示第n次迭代中获取到的目标图像,M(n-1)表示第n-1次迭代中获取到的位置信息,X(n-1)表示第n-1次迭代中获取到的目标图像,η1和η2为更新步长,分别表示位置梯度信息和图像梯度信息。在第n个迭代过程中,的网络参数分别为η1和η2为更新步长。
在一种可能实现方式中,近端网络由位置残差网络表示。如图5所示的位置提取网络501和伪影去除网络502,将上一个位置提取网络输出的M(n-1)输入到位置提取网络501,在位置提取网络501中将M(n-1)融合后,输入残差网络,由残差网络输出M(n)。同理,将上一个位移去除网络输出的X(n-1)输入伪影去除网络502,在伪影去除网络502中将X(n-1)融合后,输入残差网络,由残差网络输出X(n)。其中,残差网络依次包括:卷积层、Batch Normalization(批量标准化)层、ReLU(线性整流)层、卷积层、Batch Normalization层以及跨链接层。可选地,卷积层的卷积核大小为3*3,步长为1。需要说明的是,近端网络还可以采用其他类型的网络结构,本申请实施例对此不做限定。
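下面给出上述残差网络结构（依次包括卷积层、Batch Normalization层、ReLU层、卷积层、Batch Normalization层以及跨链接层，3*3卷积、步长为1）的一个最小示例，基于PyTorch实现，通道数等超参数为假设值，仅作示意：

```python
import torch.nn as nn

class ProximalResBlock(nn.Module):
    """残差网络示例：卷积-Batch Normalization-ReLU-卷积-Batch Normalization-跨链接。"""

    def __init__(self, channels: int = 32):  # 通道数为假设值
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, stride=1, padding=1),  # 3*3卷积，步长为1
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, stride=1, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return x + self.body(x)  # 跨链接（残差连接）
```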
基于上述图像处理模型的创建过程,能够创建如图6所示的图像处理模型,其中,每个图像处理子模型中均包括一个位置提取网络和一个伪影去除网络。
需要说明的是,本申请实施例提供的图像处理模型,是基于图像处理领域的金属伪影去除任务所创建的,图像处理模型中的网络结构是由包括金属伪影的医学图像的结构特性以及金属伪影的结构特性所决定的,因此该图像处理模型中的每一个操作均具有物理意义,整个图像处理模型的结构相当于是白箱操作,具有很好的模型可解释性。
图7是本申请实施例提供的又一种图像处理方法的流程图,本申请实施例中,计算机设备基于图像处理模型中的目标数量个图像处理子模型,对医学图像进行处理,得到去除金属伪影后的目标图像,图像处理模型多个图像处理子模型,每个图像处理子模型包括位置提取网络和伪影去除网络,则该方法包括如下步骤。
701、计算机设备调用位置提取网络,确定医学图像中的金属伪影的多个区域位置信息。
计算机设备将医学图像输入位置提取网络,基于位置提取参数,对医学图像进行位置提取,得到多个区域位置信息,每个区域位置信息表示医学图像包含的金属伪影中的每个区域的位置。需要说明的是,在本申请实施例中,计算机设备会基于多个图像处理子模型,对医学图像进行多次迭代处理,每个图像处理子模型,均会输出医学图像的位置信息、伪影信息和目标图像,当前图像处理子模型输出的位置信息和目标图像,会作为下一个图像处理子模型的输入。需要说明的是,该图像处理模型为训练后的图像处理模型,训练后的图形处理模型包括调整后的位置提取参数。计算机设备将医学图像输入训练后的图像处理模型的位置提取网络中,基于调整后的位置提取参数,对医学图像进行位置提取,得到多个区域位置信息。
在当前的位置提取网络为第一个位置提取网络的情况下,计算机设备获取已存储的多个参考区域位置信息和第三参考图像。可选地,该参考区域位置信息为计算机设备预先设置的位置信息,例如该位置信息为0。可选地,该第三参考图像是对医学图像进行伪影去除得到的,并且得到该第三参考图像所采用的伪影去除方法与本申请实施例提供的方法不同,也即是第三参考图像与本申请实施例中的目标图像是对医学图像采用不同的方式进行伪影去除得到的,例如第三参考图像是采用线性插值算法对医学图像进行伪影去除所得到的图像,或者第三参考图像是采用高斯滤波或其他滤波算法对医学图像进行伪影去除所得到的图像。
在位置提取网络为第一个位置提取网络之后的位置提取网络的情况下,计算机设备获取上一个图像处理子模型输出的位置信息和目标图像,并将上一个图像处理子模型输出的位置信息和上一个图像处理子模型输出的目标图像确定位置提取网络的输入。
在位置提取网络为第一个位置提取网络的情况下,计算机设备确定该位置信息的过程包括:计算机设备将医学图像、第三参考图像和参考位置信息输入至位置提取网络,位置提取网络通过对医学图像和第三参考图像进行对比,确定位置梯度信息,根据位置梯度信息对参考位置信息进行调整,输出该位置信息,其中,该过程与下述步骤705中输出调整后的位置信息的过程同理,在此暂不作详细说明。
702、计算机设备调用伪影去除网络,基于多个区域位置信息、调整后的第一区域结构参数和调整后的第二区域结构参数,构建第一伪影信息。
其中,训练后的图像处理模型包括调整后的结构参数,调整后的结构参数包括调整后的第一区域结构参数和调整后的第二区域结构参数,该调整后的图像处理模型中的伪影去除网络中包括该调整后的结构参数,则计算机设备将位置信息输入至伪影去除网络,该伪影去除网络根据调整后的结构参数和位置信息,即分别根据每个区域的区域位置信息和的区域结构参数,确定每个区域的区域伪影信息,然后将多个区域伪影信息构成第一伪影信息,该第一伪影信息表示预测得到的该医学图像中的金属伪影的位置信息以及结构信息。可选地,该伪影去除网络中包括卷积操作,结构参数为卷积核,则计算机设备在伪影去除网络中,将结构参数和位置信息进行卷积,得到该第一伪影信息。
在一种可能实现方式中,该结构参数采用卷积核来表示,计算机设备将每个区域结构参数与的区域位置信息进行卷积处理,得到该区域伪影信息。
703、计算机设备调用伪影去除网络,根据第一伪影信息,对医学图像进行伪影去除,得到目标图像。
计算机设备得到的第一伪影信息是表示医学图像中的金属伪影的信息,则在该医学图像中去除该第一伪影信息,即可得到医学图像的目标图像。
在一种可能实现方式中,医学图像中包括金属区域和非金属区域,金属区域是医学图像中的金属所在的区域,非金属区域是医学图像中不包括金属的区域。计算机设备将医学图像中的非金属区域确定为非金属图像,在第一伪影信息中确定属于非金属区域的第一伪影信息,在该非金属图像中,去除属于非金属区域的第一伪影信息,得到该目标图像。
在一种可能实现方式中,计算机设备将医学图像与第一伪影信息之间的差异信息,确定为第二参考图像,对第二参考图像和已存储的第三参考图像进行加权,得到第四参考图像。其中,第三参考图像与目标图像是对医学图像采用不同的方式进行伪影去除得到的。
704、计算机设备调用下一个位置提取网络,通过将医学图像与目标图像进行对比,分别确定多个区域的区域位置梯度信息。
下一个位置提取网络是上述伪影去除网络的下一个图像处理子模型中的位置提取网络。计算机设备将目标图像作为下一个位置提取网络的输入,基于位置提取网络,通过将医学图像与目标图像进行对比,分别确定多个区域的区域位置梯度信息,区域位置梯度信息指示区域位置信息的变化幅度,也即是预测得到的区域位置信息向更加正确的方向进行变化的幅度。
在一种可能实现方式中,位置提取网络包括位置提取层。计算机设备调用位置提取层,将医学图像和目标图像之间的差异信息,确定为第二伪影信息,该第二伪影信息表示图像处 理模型预测得到的该医学图像中的金属伪影的位置信息以及结构信息。相比于第一伪影信息,该第二伪影信息为更加准确的伪影信息。计算机设备调用位置提取层,根据第一伪影信息和第二伪影信息,分别确定多个区域位置梯度信息。
可选地,计算机设备将第一伪影信息和第二伪影信息之间的差异信息,确定为伪影差异信息,根据伪影差异信息、调整后的第一区域结构参数和调整后的第二区域结构参数,确定区域位置梯度信息。由于第一伪影信息和第二伪影信息表示的是金属伪影的位置信息和结构信息,则该伪影差异信息表示的是金属伪影在位置信息上的差异以及在结构信息上的差异,而第一区域结构参数和第二区域结构参数表示的是金属伪影的结构特性,因此基于该伪影差异信息、调整后的第一区域结构参数和调整后的第二区域结构参数,能够确定出在位置信息上的差异,也即是在位置信息上的变化幅度,从而得到该区域位置梯度信息。
可选地,计算机设备确定医学图像中的非金属区域,非金属区域是指医学图像中不包括金属的区域。计算机设备在伪影差异信息中,确定位于非金属区域的伪影差异信息,根据调整后的第一区域结构参数、调整后的第二区域结构参数和位于非金属区域的伪影差异信息,确定多个区域位置梯度信息。
705、计算机设备调用该下一个位置提取网络,分别根据多个区域位置梯度信息对多个区域位置信息进行调整,得到调整后的多个区域位置信息。
706、计算机设备调用下一个伪影去除网络,根据调整后的多个区域位置信息对医学图像进行伪影去除,得到调整后的目标图像,直至得到目标数量个图像处理子模型输出的目标图像,将得到的最后一个目标图像确定为医学图像去除金属伪影后的图像。
图像处理模型中包括目标数量个图像处理子模型,伪影去除网络输出目标图像后,计算机设备继续将伪影去除网络输出的位置信息、目标图像以及医学图像,作为伪影去除网络的下一个图像处理子模型的输入,由下一个图像处理子模型输出下一个位置信息和下一个目标图像,直至目标数量个图像处理子模型均执行上述金属伪影的去除过程,该目标数量个图像处理子模型均输出目标图像,也即是直至得到图像处理模型中的最后一个图像处理子模型输出的目标图像,则完成整个图像处理过程。计算机设备将得到的最后一个目标图像确定为医学图像去除金属伪影后的图像。
在一种可能实现方式中,计算机设备调用伪影去除网络,基于调整后的多个区域位置信息、调整后的第一区域结构参数和调整后的第二区域结构参数,确定调整后的第一伪影信息,基于调整后的第一伪影信息,对医学图像进行伪影去除,直至得到目标数量个目标图像,将得到的最后一个目标图像确定为医学图像去除金属伪影后的图像。其中,确定调整后的第一伪影信息的过程与上述步骤702的过程同理,基于调整后的第一伪影信息对医学图像进行伪影去除的过程,与上述步骤703的过程同理。
在一种可能实现方式中,伪影去除网络包括图像重构层。计算机设备调用图像重构层,根据调整后的多个区域位置信息、调整后的第一区域结构参数和调整后的第二区域结构参数,构建调整后的伪影信息;将医学图像与调整后的伪影信息之间的差异信息,确定为第一参考图像;对第一参考图像和目标图像进行加权,得到调整后的目标图像。
例如,参见图8,将医学图像输入至图像处理模型,该图像处理模型基于上述步骤701-步骤706,对医学图像进行处理,得到目标图像。
需要说明的是,本申请实施例仅是以图像处理模型包括多个图像处理子模型为例进行说明,在另一实施例中,在图像处理模型包括一个图像处理子模型,即图像处理模型包括一个位置提取网络和一个伪影去除网络的情况下,将该伪影去除网络输出的目标图像作为去除金属伪影后的目标图像。
本申请实施例提供的方法,在训练图像处理模型时,基于金属伪影包含旋转对称的多个区域这一结构特性,将金属伪影在结构上的特性作为金属伪影去除时的先验知识,充分考虑了金属伪影是旋转对称的多个区域这一结构特性,从而在调用图像处理模型去除医学图像中的金属伪影时,提高了图像处理模型去除金属伪影的效果。
并且,本申请实施例进行多次迭代,将本次迭代的结果应用于下一次迭代过程中,来不断优化确定的目标图像,将最后一次迭代所得到目标图像确定为医学图像去除金属伪影后的图像,能够进一步保证去除金属伪影的效果。
并且,相关技术中,利用深度学习网络去除金属伪影,需要获取包括金属伪影的图像的弦图,对弦图进行处理。本申请实施例提供的方案为基于图像域的处理方法,无需搜集医学图像的弦图,降低了数据的获取成本。
图9是本申请实施例提供的再一种图像处理方法的流程图,包括图像处理模型的训练过程和测试过程。在训练过程中,计算机设备对样本医学图像进行预处理,使用图像处理模型,将预处理后的样本医学图像去除金属伪影,根据去除的结果迭代训练图像处理模型,直至迭代次数达到目标次数,保存训练好的图像处理模型。在测试过程中,计算机设备对医学图像进行预处理,并加载训练完成的图像处理模型,基于该图像处理模型,将预处理后的医学图像去除金属伪影,输出去除金属伪影后的目标图像。
图10是本申请实施例提供的一种图像处理装置的结构示意图。参见图10,该装置包括:
模型获取模块1001,用于获取图像处理模型,该图像处理模型包含结构参数,该结构参数表示金属伪影的结构,该结构参数包括第一区域的第一区域结构参数和第二区域的第二区域结构参数,该第一区域为该金属伪影中的任一区域,该第二区域是该金属伪影中除该第一区域之外的区域;
模型训练模块1002,用于在训练该图像处理模型时,基于训练样本调整该第一区域结构参数,得到调整后的第一区域结构参数;基于该第一区域和该第二区域之间的角度差,调整该调整后的第一区域结构参数,将得到的区域结构参数确定为调整后的第二区域结构参数;
其中,训练后的图像处理模型用于基于调整后的结构参数去除任一医学图像中的金属伪影。
本申请实施例提供的装置,基于金属伪影包含旋转对称的多个区域这一结构特性,在调整图像处理模型中金属伪影的结构参数时,首先利用训练样本调整第一区域的第一区域结构参数,得到调整后的第一区域结构参数,然后继续对该第一区域结构参数进行调整,将调整得到的区域结构参数确定为第二区域的调整后的第二区域结构参数,将金属伪影在结构上的特性作为金属伪影去除时的先验知识,充分考虑了金属伪影是旋转对称的多个区域这一结构特性,能够提高图像处理模型去除金属伪影的效果,同时由于只需基于训练样本调整第一区域结构参数,无需直接对第二区域结构参数本身进行调整,仅需利用对第一区域结构参数的调整结果,即可得到调整后的第二区域结构参数,能够提高模型的训练效率。
在一种可能实现方式中,第一区域结构参数由权重系数和第一区域的第一原始区域结构参数的乘积表示;模型训练模块1002,用于在训练该图像处理模型时,基于训练样本调整权重系数和第一原始区域结构参数,得到调整后的权重系数和调整后的第一原始区域结构参数,调整后的第一区域结构参数由调整后的权重系数和调整后的第一原始区域结构参数的乘积表示。
在另一种可能实现方式中,金属伪影中的每个区域包括至少一个条形伪影,第一区域的第一原始区域结构参数为矩阵,该矩阵用于表示第一区域;该模型训练模块1002,用于:
将矩阵中不小于参考数值的元素调整为目标数值,该目标数值表示所在的目标位置不对应第一区域中的任一子区域,调整后的矩阵中的非目标位置与第一区域中的多个子区域分别对应,非目标位置是指调整后的矩阵中除目标数值所在的目标位置之外的其他位置,非目标位置上的元素表示第一区域中对应的子区域中是否包含条形伪影,以及在子区域中包含条形伪影的情况下的条形伪影。
在另一种可能实现方式中,该模型训练模块1002,用于基于第一区域和第二区域之间的角度差,确定旋转参数;基于旋转参数,调整该调整后的第一区域结构参数,将得到的区域结构参数确定为调整后的第二区域结构参数。
在另一种可能实现方式中,该第一区域结构参数为矩阵,该矩阵中的非目标位置上的元素表示该第一区域中对应的子区域中是否包含条形伪影,以及在该子区域中包含该条形伪影的情况下的该条形伪影,该非目标位置是指该矩阵中除目标数值所在的目标位置之外的其他位置;该模型训练模块1002,用于基于该旋转参数,调整该调整后的第一区域结构参数中元素的位置,以使得到的区域结构参数中该非目标位置的元素表示该第二区域中对应的子区域中是否包含条形伪影,以及在该子区域中包含该条形伪影的情况下的该条形伪影。
在另一种可能实现方式中,图像处理模型还包含位置提取参数,模型训练模块1002,还用于在训练该图像处理模型时,基于训练样本调整位置提取参数;其中,训练后的该图像处理模型用于基于调整后的结构参数和调整后的位置提取参数去除任一医学图像中的金属伪影。
在另一种可能实现方式中,参见图11,该装置还包括:
图像处理模块1003,用于调用训练后的图像处理模型,基于位置提取参数,对医学图像进行位置提取,得到多个区域位置信息,每个区域位置信息表示医学图像包含的金属伪影中的每个区域的位置;基于多个区域位置信息、调整后的第一区域结构参数和调整后的第二区域结构参数,构建第一伪影信息;基于第一伪影信息,对医学图像进行伪影去除,得到目标图像。
在另一种可能实现方式中,参见图11,该图像处理模块1003,包括:
位置梯度确定单元,用于通过将该医学图像与该目标图像进行对比,分别确定该多个区域的区域位置梯度信息,该区域位置梯度信息指示区域位置信息的变化幅度;
位置信息确定单元,用于基于多个区域位置梯度信息分别对该多个区域位置信息进行调整,得到调整后的多个区域位置信息;
伪影去除单元,用于基于该调整后的多个区域位置信息、该调整后的第一区域结构参数和该调整后的第二区域结构参数,确定调整后的第一伪影信息,基于该调整后的第一伪影信息,对该医学图像进行伪影去除,直至得到目标数量个目标图像,将得到的最后一个目标图像确定为该医学图像去除该金属伪影后的图像。
在另一种可能实现方式中,该位置梯度确定单元,用于:
将该医学图像和该目标图像之间的差异信息,确定为第二伪影信息;
基于该第一伪影信息和该第二伪影信息,分别确定该多个区域的区域位置梯度信息。
在另一种可能实现方式中,该位置梯度确定单元,用于:
将该第一伪影信息和该第二伪影信息之间的差异信息,确定为伪影差异信息;
基于该伪影差异信息、该调整后的第一区域结构参数和该调整后的第二区域结构参数,分别确定该多个区域的区域位置梯度信息。
在另一种可能实现方式中,图像处理模型包括位置提取网络和伪影去除网络;参见图11,该装置还包括:图像处理模块1003,用于调用该位置提取网络,对医学图像进行位置提取,得到金属伪影中多个区域的区域位置信息;调用伪影去除网络,基于多个区域位置信息、调整后的第一区域结构参数和调整后的第二区域结构参数,确定第一伪影信息,基于第一伪影信息,对医学图像进行伪影去除,得到目标图像。
上述所有可选技术方案,可以采用任意结合形成本申请的可选实施例,在此不再赘述。
需要说明的是:上述实施例提供的图像处理装置在处理图像时,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将计算机设备的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。另外,上述实施例提供的图像处理装置与图像处理方法实施例属于同一构思,其具体实现过程详见方法实施例,这里不再赘述。
本申请实施例还提供了一种计算机设备,该计算机设备包括处理器和存储器,存储器中存储有至少一条计算机程序,该至少一条计算机程序由处理器加载并执行,以实现上述实施例的图像处理方法所执行的操作。
可选地,该计算机设备提供为终端。图12是本申请实施例提供的一种终端1200的结构示意图。该终端1200可以是便携式移动终端,比如:智能手机、平板电脑、MP3播放器(Moving Picture Experts Group Audio Layer III,动态影像专家压缩标准音频层面3)、MP4(Moving Picture Experts Group Audio Layer IV,动态影像专家压缩标准音频层面4)播放器、笔记本电脑或台式电脑。终端1200还可能被称为用户设备、便携式终端、膝上型终端、台式终端等其他名称。
终端1200包括有:处理器1201和存储器1202。
处理器1201可以包括一个或多个处理核心,比如4核心处理器、8核心处理器等。处理器1201可以采用DSP(Digital Signal Processing,数字信号处理)、FPGA(Field-Programmable Gate Array,现场可编程门阵列)、PLA(Programmable Logic Array,可编程逻辑阵列)中的至少一种硬件形式来实现。处理器1201也可以包括主处理器和协处理器,主处理器是用于对在唤醒状态下的数据进行处理的处理器,也称CPU(Central Processing Unit,中央处理器);协处理器是用于对在待机状态下的数据进行处理的低功耗处理器。在一些实施例中,处理器1201可以集成有GPU(Graphics Processing Unit,图像处理器),GPU用于负责显示屏所需要显示的内容的渲染和绘制。在一些实施例中,处理器1201还可以包括AI(Artificial Intelligence,人工智能)处理器,该AI处理器用于处理有关机器学习的计算操作。
存储器1202可以包括一个或多个计算机可读存储介质,该计算机可读存储介质可以是非暂态的。存储器1202还可包括高速随机存取存储器,以及非易失性存储器,比如一个或多个磁盘存储设备、闪存存储设备。在一些实施例中,存储器1202中的非暂态的计算机可读存储介质用于存储至少一条计算机程序,该至少一条计算机程序用于被处理器1201所执行以实现本申请中方法实施例提供的图像处理方法。
在一些实施例中,终端1200还可选包括有:外围设备接口1203和至少一个外围设备。处理器1201、存储器1202和外围设备接口1203之间可以通过总线或信号线相连。各个外围设备可以通过总线、信号线或电路板与外围设备接口1203相连。具体地,外围设备包括:射频电路1204、显示屏1205、摄像头组件1206、音频电路1207和电源1208中的至少一种。
外围设备接口1203可被用于将I/O(Input/Output,输入/输出)相关的至少一个外围设备连接到处理器1201和存储器1202。在一些实施例中,处理器1201、存储器1202和外围设备接口1203被集成在同一芯片或电路板上;在一些其他实施例中,处理器1201、存储器1202和外围设备接口1203中的任意一个或两个可以在单独的芯片或电路板上实现。
射频电路1204用于接收和发射RF(Radio Frequency,射频)信号,也称电磁信号。射频电路1204通过电磁信号与通信网络以及其他通信设备进行通信。射频电路1204将电信号转换为电磁信号进行发送,或者,将接收到的电磁信号转换为电信号。可选地,射频电路1204包括:天线系统、RF收发器、一个或多个放大器、调谐器、振荡器、数字信号处理器、编解码芯片组、用户身份模块卡等等。射频电路1204可以通过至少一种无线通信协议来与其它终端进行通信。该无线通信协议包括但不限于:万维网、城域网、内联网、各代移动通信网络(2G、3G、4G及5G)、无线局域网和/或WiFi(Wireless Fidelity,无线保真)网络。在一些实施例中,射频电路1204还可以包括NFC(Near Field Communication,近距离无线通信)有关的电路,本申请对此不加以限定。
显示屏1205用于显示UI(User Interface,用户界面)。该UI包括图形、文本、图标、视频及其它们的任意组合。当显示屏1205是触摸显示屏时,显示屏1205具有采集在显示屏1205的表面或表面上方的触摸信号的能力。触摸信号可以作为控制信号输入至处理器1201进行处理。显示屏1205还用于提供虚拟按钮和/或虚拟键盘,也称软按钮和/或软键盘。在一些实施例中,显示屏1205可以为一个,设置在终端1200的前面板;在另一些实施例中,显示屏1205可以为至少两个,分别设置在终端1200的不同表面或呈折叠设计;在另一些实施例中,显示屏1205可以是柔性显示屏,设置在终端1200的弯曲表面上或折叠面上。甚至,显示屏1205还可以设置成非矩形的不规则图形,也即异形屏。显示屏1205可以采用LCD(Liquid Crystal Display,液晶显示屏)、OLED(Organic Light-Emitting Diode,有机发光二极管)等材质制备。
摄像头组件1206用于采集图像或视频。可选地,摄像头组件1206包括前置摄像头和后置摄像头。前置摄像头设置在终端的前面板,后置摄像头设置在终端的背面。在一些实施例中,后置摄像头为至少两个,分别为主摄像头、景深摄像头、广角摄像头、长焦摄像头中的任意一种,以实现主摄像头和景深摄像头融合实现背景虚化功能、主摄像头和广角摄像头融合实现全景拍摄以及VR(Virtual Reality,虚拟现实)拍摄功能或者其它融合拍摄功能。在一些实施例中,摄像头组件1206还包括闪光灯。闪光灯可以是单色温闪光灯,也可以是双色温闪光灯。双色温闪光灯是指暖光闪光灯和冷光闪光灯的组合,可用于不同色温下的光线补偿。
音频电路1207可以包括麦克风和扬声器。麦克风用于采集用户及环境的声波,将声波转换为电信号输入至处理器1201进行处理,或输入至射频电路1204以实现语音通信。出于立体声采集或降噪的目的,麦克风可以为多个,分别设置在终端1200的不同部位。麦克风还可以是阵列麦克风或全向采集型麦克风。扬声器则用于将来自处理器1201或射频电路1204的电信号转换为声波。扬声器可以是传统的薄膜扬声器,也可以是压电陶瓷扬声器。当扬声器是压电陶瓷扬声器时,不仅可以将电信号转换为人类可听见的声波,也可以将电信号转换为人类听不见的声波以进行测距等用途。在一些实施例中,音频电路1207还可以包括耳机插孔。
电源1208用于为终端1200中的各个组件进行供电。电源1208可以是交流电、直流电、一次性电池或可充电电池。当电源1208包括可充电电池时,该可充电电池可以是有线充电电池或无线充电电池。有线充电电池是通过有线线路充电的电池,无线充电电池是通过无线线圈充电的电池。该可充电电池还可以用于支持快充技术。
本领域技术人员可以理解,图12中示出的结构并不构成对终端1200的限定,可以包括比图示更多或更少的组件,或者组合某些组件,或者采用不同的组件布置。
可选地,该计算机设备提供为服务器。图13是本申请实施例提供的一种服务器的结构示意图,该服务器1300可因配置或性能不同而产生比较大的差异,可以包括一个或一个以上处理器(Central Processing Units,CPU)1301和一个或一个以上的存储器1302,其中,存储器1302中存储有至少一条计算机程序,该至少一条计算机程序由处理器1301加载并执行以实现上述各个方法实施例提供的方法。当然,该服务器还可以具有有线或无线网络接口、键盘以及输入输出接口等部件,以便进行输入输出,该服务器还可以包括其他用于实现设备功能的部件,在此不做赘述。
本申请实施例还提供了一种计算机可读存储介质,该计算机可读存储介质中存储有至少一条计算机程序,该至少一条计算机程序由处理器加载并执行,以实现上述实施例的图像处理方法所执行的操作。
本申请实施例还提供了一种计算机程序产品,该计算机程序产品包括计算机程序,该计算机程序被处理器执行时实现上述实施例的图像处理方法所执行的操作。
在一些实施例中,本申请实施例所涉及的计算机程序可被部署在一个计算机设备上执行,或者在位于一个地点的多个计算机设备上执行,又或者,在分布在多个地点且通过通信网络互连的多个计算机设备上执行,分布在多个地点且通过通信网络互连的多个计算机设备可以组成区块链系统。
可以理解的是,在本申请的具体实施方式中,涉及到用户信息等相关的数据,当本申请以上实施例运用到具体产品或技术中时,需要获得用户许可或者同意,且相关数据的收集、使用和处理需要遵守相关国家和地区的相关法律法规和标准。例如,本申请中涉及到的医学图像等都是在充分授权的情况下获取的。
本领域普通技术人员可以理解实现上述实施例的全部或部分步骤可以通过硬件来完成,也可以通过程序来指令相关的硬件完成,该程序可以存储于一种计算机可读存储介质中,上述提到的存储介质可以是只读存储器,磁盘或光盘等。
以上仅为本申请实施例的可选实施例,并不用以限制本申请实施例,凡在本申请实施例的精神和原则之内所作的任何修改、等同替换、改进等,均应包含在本申请的保护范围之内。

Claims (15)

  1. 一种图像处理方法,所述方法包括:
    计算机设备获取图像处理模型,所述图像处理模型包含结构参数,所述结构参数表示金属伪影的结构,所述结构参数包括第一区域的第一区域结构参数和第二区域的第二区域结构参数,所述第一区域为所述金属伪影中的任一区域,所述第二区域是所述金属伪影中除所述第一区域之外的区域;
    所述计算机设备在训练所述图像处理模型时,基于训练样本调整所述第一区域结构参数,得到调整后的第一区域结构参数;基于所述第一区域和所述第二区域之间的角度差,调整所述调整后的第一区域结构参数,将得到的区域结构参数确定为调整后的第二区域结构参数;
    其中,训练后的所述图像处理模型用于基于调整后的结构参数去除任一医学图像中的金属伪影。
  2. 根据权利要求1所述的方法,其中,所述第一区域结构参数由权重系数和所述第一区域的第一原始区域结构参数的乘积表示;所述计算机设备在训练所述图像处理模型时,基于训练样本调整所述第一区域结构参数,得到调整后的第一区域结构参数,包括:
    所述计算机设备在训练所述图像处理模型时,基于所述训练样本调整所述权重系数和所述第一原始区域结构参数,得到调整后的权重系数和调整后的第一原始区域结构参数,所述调整后的第一区域结构参数由所述调整后的权重系数和所述调整后的第一原始区域结构参数的乘积表示。
  3. 根据权利要求2所述的方法,其中,所述金属伪影中的每个所述区域包括至少一个条形伪影,所述第一区域的第一原始区域结构参数为矩阵,所述矩阵用于表示第一区域;
    所述计算机设备在训练所述图像处理模型时,基于所述训练样本调整所述权重系数和所述第一区域的第一原始区域结构参数,得到调整后的权重系数和调整后的第一原始区域结构参数之后,所述方法还包括:
    所述计算机设备将所述矩阵中不小于参考数值的元素调整为目标数值,所述目标数值表示所在的目标位置不对应第一区域中的任一子区域,调整后的矩阵中的非目标位置与所述第一区域中的多个子区域分别对应,所述非目标位置是指所述调整后的矩阵中除所述目标数值所在的目标位置之外的其他位置,所述非目标位置上的元素表示所述第一区域中对应的子区域中是否包含条形伪影,以及在所述子区域中包含所述条形伪影的情况下的所述条形伪影。
  4. 根据权利要求1所述的方法,其中,所述计算机设备基于所述第一区域和所述第二区域之间的角度差,调整所述调整后的第一区域结构参数,将得到的区域结构参数确定为调整后的第二区域结构参数,包括:
    所述计算机设备基于所述第一区域和所述第二区域之间的角度差,确定旋转参数;
    所述计算机设备基于所述旋转参数,调整所述调整后的第一区域结构参数,将得到的区域结构参数确定为调整后的第二区域结构参数。
  5. 根据权利要求4所述的方法,其中,所述第一区域结构参数为矩阵,所述矩阵中的非目标位置上的元素表示所述第一区域中的子区域中是否包含条形伪影,以及在所述子区域中包含所述条形伪影的情况下的所述条形伪影,所述非目标位置是指所述矩阵中除目标数值所在的目标位置之外的其他位置;所述计算机设备基于所述旋转参数,调整所述调整后的第一区域结构参数,包括:
    所述计算机设备基于所述旋转参数,调整所述调整后的第一区域结构参数中元素的位置, 以使得到的区域结构参数中所述非目标位置的元素表示所述第二区域中对应的子区域中是否包含条形伪影,以及在所述子区域中包含该条形伪影的情况下的所述条形伪影。
  6. 根据权利要求1所述的方法,其中,所述图像处理模型还包含位置提取参数,所述位置提取参数用于从医学图像中提取金属伪影的位置信息,所述计算机设备获取图像处理模型之后,所述方法还包括:
    所述计算机设备在训练所述图像处理模型时,基于所述训练样本调整所述位置提取参数;
    其中,训练后的所述图像处理模型用于基于所述调整后的结构参数和调整后的位置提取参数去除任一医学图像中的金属伪影。
  7. 根据权利要求6所述的方法,其中,所述方法还包括:
    所述计算机设备调用训练后的所述图像处理模型,执行如下步骤:
    基于所述位置提取参数,对所述医学图像进行位置提取,得到多个区域位置信息,每个所述区域位置信息表示所述医学图像包含的所述金属伪影中的每个区域的位置;
    基于多个区域位置信息、所述调整后的第一区域结构参数和所述调整后的第二区域结构参数,构建第一伪影信息;
    基于所述第一伪影信息,对所述医学图像进行伪影去除,得到目标图像。
  8. 根据权利要求7所述的方法,其中,所述基于所述第一伪影信息,对所述医学图像进行伪影去除,得到目标图像之后,所述方法还包括:
    通过将所述医学图像与所述目标图像进行对比,分别确定所述多个区域的区域位置梯度信息,所述区域位置梯度信息指示区域位置信息的变化幅度;
    基于多个区域位置梯度信息分别对所述多个区域位置信息进行调整,得到调整后的多个区域位置信息;
    基于所述调整后的多个区域位置信息、所述调整后的第一区域结构参数和所述调整后的第二区域结构参数,确定调整后的第一伪影信息,基于所述调整后的第一伪影信息,对所述医学图像进行伪影去除,直至得到目标数量个目标图像,将得到的最后一个目标图像确定为所述医学图像去除所述金属伪影后的图像。
  9. 根据权利要求8所述的方法,其中,所述通过将所述医学图像与所述目标图像进行对比,分别确定所述多个区域的区域位置梯度信息,包括:
    将所述医学图像和所述目标图像之间的差异信息,确定为第二伪影信息;
    基于所述第一伪影信息和所述第二伪影信息,分别确定所述多个区域的区域位置梯度信息。
  10. 根据权利要求9所述的方法,其中,所述基于所述第一伪影信息和所述第二伪影信息,分别确定所述多个区域的区域位置梯度信息,包括:
    将所述第一伪影信息和所述第二伪影信息之间的差异信息,确定为伪影差异信息;
    基于所述伪影差异信息、所述调整后的第一区域结构参数和所述调整后的第二区域结构参数,分别确定所述多个区域的区域位置梯度信息。
  11. 根据权利要求1所述的方法,其中,所述图像处理模型包括位置提取网络和伪影去除网络;所述方法还包括:
    所述计算机设备调用所述位置提取网络,对所述医学图像进行位置提取,得到所述金属伪影中多个区域的区域位置信息;
    所述计算机设备调用所述伪影去除网络,基于多个区域位置信息、所述调整后的第一区 域结构参数和所述调整后的第二区域结构参数,确定第一伪影信息,基于所述第一伪影信息,对所述医学图像进行伪影去除,得到目标图像。
  12. 一种图像处理装置,所述装置包括:
    模型获取模块,用于获取图像处理模型,所述图像处理模型包含结构参数,所述结构参数表示金属伪影的结构,所述结构参数包括第一区域的第一区域结构参数和第二区域的第二区域结构参数,所述第一区域为所述金属伪影中的任一区域,所述第二区域是所述金属伪影中除所述第一区域之外的区域;
    模型训练模块,用于在训练所述图像处理模型时,基于训练样本调整所述第一区域结构参数,得到调整后的第一区域结构参数;基于所述第一区域的角度和所述第二区域的角度,调整所述调整后的第一区域结构参数,将得到的区域结构参数确定为调整后的第二区域结构参数;
    其中,训练后的所述图像处理模型用于基于调整后的结构参数去除任一医学图像中的金属伪影。
  13. 一种计算机设备,所述计算机设备包括处理器和存储器,所述存储器中存储有至少一条计算机程序,所述至少一条计算机程序由所述处理器加载并执行,以实现如权利要求1至11任一权利要求所述的图像处理方法所执行的操作。
  14. 一种计算机可读存储介质,所述计算机可读存储介质中存储有至少一条计算机程序,所述至少一条计算机程序由处理器加载并执行,以实现如权利要求1至11任一权利要求所述的图像处理方法所执行的操作。
  15. 一种计算机程序产品,包括计算机程序,所述计算机程序被处理器执行时实现权利要求1至11任一权利要求所述的图像处理方法所执行的操作。
PCT/CN2023/081924 2022-04-19 2023-03-16 图像处理方法、装置、计算机设备及存储介质 WO2023202285A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210409315.6A CN115115724A (zh) 2022-04-19 2022-04-19 图像处理方法、装置、计算机设备及存储介质
CN202210409315.6 2022-04-19

Publications (1)

Publication Number Publication Date
WO2023202285A1 true WO2023202285A1 (zh) 2023-10-26

Family

ID=83325461

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/081924 WO2023202285A1 (zh) 2022-04-19 2023-03-16 图像处理方法、装置、计算机设备及存储介质

Country Status (2)

Country Link
CN (1) CN115115724A (zh)
WO (1) WO2023202285A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115115724A (zh) * 2022-04-19 2022-09-27 腾讯医疗健康(深圳)有限公司 图像处理方法、装置、计算机设备及存储介质
CN116228916B (zh) * 2023-05-10 2023-07-11 中日友好医院(中日友好临床医学研究所) 一种图像去金属伪影方法、系统及设备

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106886982A (zh) * 2017-02-20 2017-06-23 江苏美伦影像系统有限公司 Cbct图像环形伪影去除方法
CN113256529A (zh) * 2021-06-09 2021-08-13 腾讯科技(深圳)有限公司 图像处理方法、装置、计算机设备及存储介质
WO2022039313A1 (ko) * 2020-08-18 2022-02-24 연세대학교 산학협력단 Ct영상의 금속 아티팩트 보정 방법 및 장치
CN115115724A (zh) * 2022-04-19 2022-09-27 腾讯医疗健康(深圳)有限公司 图像处理方法、装置、计算机设备及存储介质

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106886982A (zh) * 2017-02-20 2017-06-23 江苏美伦影像系统有限公司 Cbct图像环形伪影去除方法
WO2022039313A1 (ko) * 2020-08-18 2022-02-24 연세대학교 산학협력단 Ct영상의 금속 아티팩트 보정 방법 및 장치
CN113256529A (zh) * 2021-06-09 2021-08-13 腾讯科技(深圳)有限公司 图像处理方法、装置、计算机设备及存储介质
CN115115724A (zh) * 2022-04-19 2022-09-27 腾讯医疗健康(深圳)有限公司 图像处理方法、装置、计算机设备及存储介质

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WANG HONG, LI YUEXIANG, HE NANJUN, MA KAI, MENG DEYU, ZHENG YEFENG: "DICDNet: Deep Interpretable Convolutional Dictionary Network for Metal Artifact Reduction in CT Images", IEEE TRANSACTIONS ON MEDICAL IMAGING, IEEE, USA, vol. 41, no. 4, 1 April 2022 (2022-04-01), USA, pages 869 - 880, XP093101559, ISSN: 0278-0062, DOI: 10.1109/TMI.2021.3127074 *

Also Published As

Publication number Publication date
CN115115724A (zh) 2022-09-27

Similar Documents

Publication Publication Date Title
CN110348543B (zh) 眼底图像识别方法、装置、计算机设备及存储介质
CN111091576B (zh) 图像分割方法、装置、设备及存储介质
US20210343041A1 (en) Method and apparatus for obtaining position of target, computer device, and storage medium
WO2023202285A1 (zh) 图像处理方法、装置、计算机设备及存储介质
CN111091166B (zh) 图像处理模型训练方法、图像处理方法、设备及存储介质
WO2022134971A1 (zh) 一种降噪模型的训练方法及相关装置
CN112419326B (zh) 图像分割数据处理方法、装置、设备及存储介质
CN113256529B (zh) 图像处理方法、装置、计算机设备及存储介质
CN111598168B (zh) 图像分类方法、装置、计算机设备及介质
CN114332530A (zh) 图像分类方法、装置、计算机设备及存储介质
CN112990053B (zh) 图像处理方法、装置、设备及存储介质
US20230097391A1 (en) Image processing method and apparatus, electronic device, computer-readable storage medium, and computer program product
CN108875767A (zh) 图像识别的方法、装置、系统及计算机存储介质
CN111950570B (zh) 目标图像提取方法、神经网络训练方法及装置
CN110211205A (zh) 图像处理方法、装置、设备和存储介质
CN113570645A (zh) 图像配准方法、装置、计算机设备及介质
CN114677350B (zh) 连接点提取方法、装置、计算机设备及存储介质
CN113516665A (zh) 图像分割模型的训练方法、图像分割方法、装置、设备
CN115131199A (zh) 图像生成模型的训练方法、图像生成方法、装置及设备
CN113674856A (zh) 基于人工智能的医学数据处理方法、装置、设备及介质
CN111598896A (zh) 图像检测方法、装置、设备及存储介质
EP4181061A1 (en) Method for reconstructing tree-shaped tissue in image, and device and storage medium
CN113257412B (zh) 信息处理方法、装置、计算机设备及存储介质
CN116704200A (zh) 图像特征提取、图像降噪方法及相关装置
CN113743186B (zh) 医学图像的处理方法、装置、设备及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23790946

Country of ref document: EP

Kind code of ref document: A1