CN115115724A - Image processing method, image processing device, computer equipment and storage medium - Google Patents
- Publication number
- CN115115724A CN115115724A CN202210409315.6A CN202210409315A CN115115724A CN 115115724 A CN115115724 A CN 115115724A CN 202210409315 A CN202210409315 A CN 202210409315A CN 115115724 A CN115115724 A CN 115115724A
- Authority
- CN
- China
- Prior art keywords
- region
- artifact
- image processing
- information
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/003—Reconstruction from projections, e.g. tomography
- G06T11/008—Specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/20—Image enhancement or restoration using local operators
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Apparatus For Radiation Diagnosis (AREA)
Abstract
The embodiment of the application discloses an image processing method and apparatus, a computer device and a storage medium, belonging to the field of computer technology. The method comprises the following steps: acquiring an image processing model comprising structure parameters, wherein the structure parameters represent the structure of a metal artifact and comprise first region structure parameters corresponding to a first region and second region structure parameters corresponding to a second region. When the image processing model is trained, the first region structure parameters are adjusted based on a training sample, and are further adjusted based on the angle corresponding to the first region and the angle corresponding to the second region to obtain the second region structure parameters corresponding to the second region; the trained image processing model removes metal artifacts from any medical image based on the adjusted structure parameters. The method fully considers the structural characteristic that a metal artifact consists of a plurality of rotationally symmetric regions, and can improve the effect with which the image processing model removes metal artifacts.
Description
Technical Field
The embodiment of the application relates to the technical field of computers, in particular to an image processing method, an image processing device, computer equipment and a storage medium.
Background
Computed Tomography (CT) can examine tissue and organ structures in the human body non-invasively, and is therefore widely used in the medical field. When CT images are acquired by computed tomography, metal implants in the body can cause metal artifacts to appear in the acquired CT images, degrading image quality. In the related art, an image processing model is used to remove metal artifacts from CT images, but current image processing models remove metal artifacts poorly.
Disclosure of Invention
The embodiment of the application provides an image processing method, an image processing device, a computer device and a storage medium, which improve the effect of removing metal artifacts. The technical scheme is as follows:
in one aspect, an image processing method is provided, and the method includes:
acquiring an image processing model, wherein the image processing model comprises structure parameters, the structure parameters represent the structure of the metal artifact, the structure parameters comprise first region structure parameters corresponding to a first region and second region structure parameters corresponding to a second region, the first region is any region in the metal artifact, and the second region is a region except the first region in the metal artifact;
when the image processing model is trained, adjusting the first region structure parameter based on a training sample, and adjusting the first region structure parameter based on an angle corresponding to the first region and an angle corresponding to the second region to obtain a second region structure parameter corresponding to the second region;
wherein the trained image processing model is used for removing metal artifacts in any medical image based on the adjusted structural parameters.
In another aspect, there is provided an image processing apparatus, the apparatus including:
a model obtaining module, configured to obtain an image processing model, where the image processing model includes structure parameters, the structure parameters represent a structure of a metal artifact and include a first region structure parameter corresponding to a first region and a second region structure parameter corresponding to a second region, the first region is any region in the metal artifact, and the second region is a region of the metal artifact other than the first region;
the model training module is used for adjusting the first region structure parameters based on a training sample when the image processing model is trained, and adjusting the first region structure parameters based on the angle corresponding to the first region and the angle corresponding to the second region to obtain second region structure parameters corresponding to the second region;
wherein the trained image processing model is used for removing metal artifacts in any medical image based on the adjusted structural parameters.
In a possible implementation manner, the structure parameter includes an adjustment coefficient and a plurality of original region structure parameters, and a product of the adjustment coefficient and one of the original region structure parameters represents a region structure parameter corresponding to a region; and the model training module is used for adjusting the adjustment coefficient and a first original region structure parameter corresponding to the first region based on the training sample when the image processing model is trained.
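As a sketch of this parameterization (names and shapes are illustrative assumptions, not from the patent text), each region's structure parameter can be expressed as the product of a shared adjustment coefficient and that region's original structure parameter, so that training only needs to update the coefficient and the first region's original parameter:

```python
import numpy as np

def region_structure_param(adjust_coeff, original_region_param):
    # Product of the shared adjustment coefficient and a per-region
    # original structure parameter gives that region's structure parameter.
    return adjust_coeff * original_region_param

# During training, only adjust_coeff and the first region's original
# parameter would be updated; other regions reuse the shared coefficient.
adjust_coeff = 0.5
first_original = np.array([[0.0, 1.0],
                           [2.0, 4.0]])
first_region_param = region_structure_param(adjust_coeff, first_original)
```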
In another possible implementation manner, each region of the metal artifact includes at least one bar artifact, and a first original region structure parameter corresponding to the first region is a matrix, where the matrix is used to represent the first region;
the model training module is configured to:
adjusting elements in the matrix that are not smaller than a reference value to a target value, where the target value indicates that the element's position does not correspond to any sub-region of the first region; the positions of the adjusted matrix other than those holding the target value correspond to sub-regions of the first region, and the elements at those positions indicate whether the corresponding sub-region contains a bar artifact and, if so, the shape of the bar artifact.
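A minimal sketch of this thresholding step (the function name, reference value, and marker value are assumptions for illustration, not taken from the patent text): elements of the region matrix that are not smaller than the reference value are overwritten with a target value marking positions that correspond to no sub-region.

```python
import numpy as np

TARGET_VALUE = -1.0  # assumed marker: "this position maps to no sub-region"

def mask_non_subregions(region_matrix, reference_value):
    # Elements not smaller than the reference value are set to the target
    # value; the remaining elements continue to describe whether the
    # corresponding sub-region contains a bar artifact and its shape.
    adjusted = region_matrix.copy()
    adjusted[adjusted >= reference_value] = TARGET_VALUE
    return adjusted

m = np.array([[0.2, 0.9],
              [1.5, 0.1]])
masked = mask_non_subregions(m, reference_value=0.9)
```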
In another possible implementation manner, the model training module is configured to:
determining a rotation parameter corresponding to the second region, wherein the rotation parameter represents the angle difference between the second region and the first region;
and adjusting the first region structure parameter based on the rotation parameter to obtain the second region structure parameter.
In another possible implementation manner, the first region structure parameter is a matrix, elements at positions other than the position of the target value in the matrix indicate whether a corresponding sub-region in the first region includes a bar artifact or not, and a shape of the bar artifact when the sub-region includes the bar artifact, and the model training module is configured to:
based on the rotation parameters, adjusting the positions of the elements in the first region structure parameters, so that the elements at the positions other than the position where the target value is located in the obtained second region structure parameters indicate whether the corresponding sub-region in the second region contains a bar artifact or not, and the shape of the bar artifact when the sub-region contains the bar artifact.
In another possible implementation manner, the model training module is further configured to adjust the position extraction parameter based on the training sample when training the image processing model;
the trained image processing model is used for removing metal artifacts in any medical image based on the adjusted structure parameters and the adjusted position extraction parameters.
In another possible implementation manner, the apparatus further includes:
the image processing module is used for calling the trained image processing model, extracting the position of the medical image based on the position extraction parameter to obtain a plurality of region position information, and each region position information represents the position of each region in the metal artifact contained in the medical image; constructing first artifact information based on a plurality of region position information, the first region structure parameter and the second region structure parameter; and removing the artifacts of the medical image based on the first artifact information to obtain the target image.
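The flow above can be sketched as follows, under the simplifying assumption that artifact construction amounts to pasting each region's structure matrix onto a blank canvas at its predicted position and that removal is a subtraction (all names are illustrative):

```python
import numpy as np

def build_first_artifact_info(region_positions, region_params, image_shape):
    # Paste each region's structure parameter (a small matrix) into a
    # blank canvas at that region's (row, col) position; the filled
    # canvas serves as the first artifact information.
    artifact = np.zeros(image_shape)
    for (r, c), param in zip(region_positions, region_params):
        h, w = param.shape
        artifact[r:r + h, c:c + w] += param
    return artifact

def remove_artifact(medical_image, artifact_info):
    # Subtracting the constructed artifact yields the target image.
    return medical_image - artifact_info
```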
In another possible implementation manner, the image processing module includes:
a position gradient determining unit, configured to determine, by comparing the medical image with the target image, region position gradient information corresponding to the plurality of regions, respectively, where the region position gradient information indicates a variation amplitude of the region position information;
a position information determining unit, configured to adjust the plurality of region position information based on a plurality of region position gradient information, respectively, to obtain a plurality of adjusted region position information;
an artifact removing unit, configured to determine adjusted first artifact information based on the adjusted region position information, the first region structure parameter, and the second region structure parameter, and to perform artifact removal on the medical image based on the adjusted first artifact information until a target number of target images is obtained, the last obtained target image being determined as the image with the metal artifact removed from the medical image.
In another possible implementation manner, the position gradient determination unit is configured to:
determining difference information between the medical image and the target image as second artifact information;
respectively determining region position gradient information corresponding to the plurality of regions based on the first artifact information and the second artifact information.
In another possible implementation manner, the position gradient determination unit is configured to:
determining difference information between the first artifact information and the second artifact information as artifact difference information;
respectively determining region position gradient information corresponding to the plurality of regions based on the artifact difference information, the first region structure parameter and the second region structure parameter.
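A toy version of this gradient determination (the update rule below is an illustrative stand-in; the patent text does not specify the exact computation): the second artifact information is the difference between the medical image and the target image, and each region's position gradient is summarized here as the mean artifact difference inside the window occupied by that region's structure matrix.

```python
import numpy as np

def second_artifact_info(medical_image, target_image):
    # The difference between the input medical image and the current
    # target image is taken as the second artifact information.
    return medical_image - target_image

def region_position_gradients(first_info, second_info, positions, region_params):
    # Illustrative stand-in for each region's "variation amplitude":
    # the mean of the artifact difference information inside the window
    # currently occupied by that region's structure matrix.
    diff = first_info - second_info  # artifact difference information
    grads = []
    for (r, c), param in zip(positions, region_params):
        h, w = param.shape
        grads.append(float(diff[r:r + h, c:c + w].mean()))
    return grads
```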
In another possible implementation, the image processing model includes a location extraction network and an artifact removal network; the device further comprises:
the image processing module is used for calling the position extraction network, extracting the positions of the medical images and obtaining the regional position information corresponding to a plurality of regions in the metal artifacts; calling the artifact removing network, determining the first artifact information based on a plurality of region position information, the first region structure parameter and the second region structure parameter, and removing the artifact of the medical image based on the first artifact information to obtain the target image.
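A skeletal sketch of this two-network split (both classes are placeholders; a real position extraction network would be a learned model rather than the fixed stub used here):

```python
import numpy as np

class PositionExtractionNetwork:
    # Placeholder for the position extraction network: a real model would
    # predict one (row, col) per artifact region from the medical image;
    # here we return fixed demo positions.
    def __call__(self, medical_image, n_regions):
        return [(0, 2 * k) for k in range(n_regions)]

class ArtifactRemovalNetwork:
    # Builds the first artifact information from the region position
    # information and region structure parameters, then subtracts it
    # from the medical image to obtain the target image.
    def __init__(self, region_params):
        self.region_params = region_params

    def __call__(self, medical_image, positions):
        artifact = np.zeros_like(medical_image)
        for (r, c), param in zip(positions, self.region_params):
            h, w = param.shape
            artifact[r:r + h, c:c + w] += param
        return medical_image - artifact
```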
In another aspect, a computer device is provided, the computer device comprising a processor and a memory, the memory having stored therein at least one computer program, the at least one computer program being loaded and executed by the processor to perform operations performed by the image processing method according to the above aspect.
In another aspect, a computer-readable storage medium is provided, in which at least one computer program is stored, the at least one computer program being loaded and executed by a processor to perform the operations performed by the image processing method according to the above aspect.
In another aspect, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the operations performed by the image processing method of the above aspect.
According to the technical scheme, when the structure parameters of the metal artifact in the image processing model are adjusted, the first region structure parameter corresponding to the first region is adjusted first, and the adjusted first region structure parameter is then used to determine the second region structure parameter corresponding to the second region. The structural characteristics of the metal artifact thus serve as prior knowledge when removing it, and the characteristic that a metal artifact consists of a plurality of rotationally symmetric regions is fully considered, which can improve the effect with which the image processing model removes metal artifacts. Meanwhile, since only the first region structure parameter is adjusted based on the training sample and the second region structure parameter does not need to be trained separately, the training efficiency of the model can also be improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and those skilled in the art can obtain other drawings based on these drawings without creative effort.
FIG. 1 is a schematic diagram of an implementation environment provided by an embodiment of the present application;
fig. 2 is a flowchart of an image processing method provided in an embodiment of the present application;
FIG. 3 is a flow chart of another image processing method provided by the embodiments of the present application;
FIG. 4 is a schematic diagram of a medical image provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of a model structure provided in an embodiment of the present application;
FIG. 6 is a schematic diagram of another model structure provided in embodiments of the present application;
FIG. 7 is a flowchart of another image processing method provided in the embodiments of the present application;
FIG. 8 is a diagram illustrating an image processing process according to an embodiment of the present disclosure;
fig. 9 is a flowchart of a further image processing method provided in an embodiment of the present application;
fig. 10 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of another image processing apparatus provided in an embodiment of the present application;
fig. 12 is a schematic structural diagram of a terminal according to an embodiment of the present application;
fig. 13 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present application more clear, the embodiments of the present application will be further described in detail with reference to the accompanying drawings.
It will be understood that the terms "first," "second," and the like as used herein may be used herein to describe various concepts, which are not limited by these terms unless otherwise specified. These terms are only used to distinguish one concept from another. For example, a first arrangement order may be referred to as a second arrangement order, and a second arrangement order may be referred to as a first arrangement order, without departing from the scope of the present application.
As used herein, "at least one" includes one, two, or more; "a plurality" includes two or more; "each" refers to every one of the corresponding plurality; and "any" refers to any one of the plurality. For example, if a plurality of angles includes 3 angles, "each angle" refers to every one of the 3 angles, and "any angle" refers to any one of the 3 angles, which may be the first, the second, or the third.
Artificial Intelligence (AI) is a theory, method, technique and application system that uses a digital computer or a machine controlled by a digital computer to simulate, extend and expand human Intelligence, perceive the environment, acquire knowledge and use the knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive technique of computer science that attempts to understand the essence of intelligence and produce a new intelligent machine that can react in a manner similar to human intelligence. Artificial intelligence is the research of the design principle and the realization method of various intelligent machines, so that the machines have the functions of perception, reasoning and decision making.
Artificial intelligence technology is a comprehensive discipline covering a wide range of fields, including both hardware-level and software-level technologies. Basic artificial intelligence technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, operation/interaction systems, and mechatronics. Artificial intelligence software technologies mainly include computer vision, speech processing, natural language processing, machine learning/deep learning, automatic driving, and intelligent transportation.
Computer Vision (CV) technology is a science that studies how to make machines "see": it uses cameras and computers in place of human eyes to identify and measure targets, and further processes the images so that they become more suitable for human observation or for transmission to instruments for detection. As a scientific discipline, computer vision studies related theories and techniques in an attempt to build artificial intelligence systems that can capture information from images or multidimensional data. Computer vision technology generally includes image processing, image recognition, image semantic understanding, image retrieval, OCR (Optical Character Recognition), video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D technology, virtual reality, augmented reality, simultaneous localization and mapping, automatic driving, and intelligent transportation, as well as common biometric technologies such as face recognition and fingerprint recognition.
Machine Learning (ML) is a multidisciplinary field involving probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory, and other subjects. It specializes in studying how computers simulate or implement human learning behavior to acquire new knowledge or skills and to reorganize existing knowledge structures for continuously improved performance. Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent; it is applied across all fields of artificial intelligence. Machine learning and deep learning generally include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, and inductive learning.
With the research and progress of artificial intelligence technology, it has been studied and applied in many fields, such as smart homes, smart wearable devices, virtual assistants, smart speakers, smart marketing, autonomous driving, unmanned aerial vehicles, robots, smart medical care, smart customer service, the Internet of Vehicles, and intelligent transportation.
According to the image processing method provided by the embodiment of the application, computer vision technology, machine learning technology and the like in artificial intelligence are utilized, the medical image comprising the metal artifact can be subjected to artifact processing, and the image with the metal artifact removed is obtained.
The image processing method provided by the embodiment of the application can be used in computer equipment. Optionally, the computer device is a terminal or a server. Optionally, the server is an independent physical server, or a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a web service, cloud communication, a middleware service, a domain name service, a security service, a CDN (Content Delivery Network), a big data and artificial intelligence platform, and the like. Optionally, the terminal is a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, etc., but is not limited thereto.
In one possible implementation, the computer program according to the embodiments of the present application may be deployed and executed on one computer device, on multiple computer devices located at one site, or on multiple computer devices distributed at multiple sites and interconnected by a communication network, where the multiple distributed, interconnected computer devices can form a blockchain system.
In one possible implementation, the computer device for training the image processing model in the embodiment of the present application is a node in a blockchain system, and the node is capable of storing the trained image processing model in the blockchain, and then the node or nodes corresponding to other devices in the blockchain may remove the metal artifact in the image based on the image processing model.
Fig. 1 is a schematic diagram of an implementation environment provided by an embodiment of the present application. The implementation environment includes a terminal 101 and a server 102. The terminal 101 and the server 102 are connected via a wireless or wired network. The terminal 101 has installed thereon a target application served by the server 102, and the terminal 101 can realize functions such as data transmission, image processing, and the like through the target application. For example, the target application is an image processing application that is capable of removing metal artifacts in CT images.
The server 102 trains an image processing model for removing metal artifacts from images and sends the trained model to the terminal 101. The terminal 101 stores the received image processing model, and can subsequently process any medical image containing metal artifacts based on the model to obtain an image with the metal artifacts removed.
The image processing method provided by the embodiment of the application can be applied to various scenes.
For example, in the medical field, a patient is scanned to obtain a CT image of the patient, and a doctor can determine the state of the patient based on the CT image of the patient and other relevant information about the patient. However, if a patient has a metal implant in the body during the scanning of the patient, metal artifacts appear in the CT images, which not only reduce the quality of the CT images, but also adversely affect the diagnostic procedure of the doctor. Therefore, the image processing method provided by the embodiment of the application can be adopted to remove the metal artifacts in the CT image and improve the quality of the CT image, thereby providing accurate auxiliary information in the clinical diagnosis process of a doctor.
Fig. 2 is a flowchart of an image processing method according to an embodiment of the present application. The execution subject of the embodiment of the application is computer equipment. Referring to fig. 2, the method comprises the steps of:
201. The computer device acquires an image processing model, wherein the image processing model comprises structure parameters, the structure parameters represent the structure of a metal artifact, the structure parameters comprise first region structure parameters corresponding to a first region and second region structure parameters corresponding to a second region, the first region is any region in the metal artifact, and the second region is a region in the metal artifact other than the first region.
The image processing model is used for removing metal artifacts in the medical images, and the metal artifacts refer to noise information caused by metal in the process of generating the medical images. The metal artifact and the metal causing the metal artifact are included in the medical image. For example, the medical image is a CT image obtained by computed tomography scanning a target object, and the metal artifact in the CT image is noise generated around metal and in the entire CT image due to absorption and reflection of X-rays by metal in or on the body of the target object, and the like.
In the embodiment of the application, the metal artifact is a rotationally symmetric strip-shaped structure, the metal artifact comprises a plurality of rotationally symmetric regions, each region comprises at least one strip, and the structure of the metal artifact is the characteristic of the metal artifact, so that for an image processing model, a structure parameter is set in the image processing model, and the structure parameter is trained, so that the structure parameter can accurately represent the structure of the metal artifact. Wherein the first region structure parameter represents a structure of a first region in the metal artifact and the second region structure parameter represents a structure of a second region in the metal artifact.
202. When the computer equipment trains the image processing model, the first region structure parameters are adjusted based on the training sample, the first region structure parameters are adjusted based on the angle corresponding to the first region and the angle corresponding to the second region, the second region structure parameters corresponding to the second region are obtained, and the trained image processing model is used for removing metal artifacts in any medical image based on the adjusted structure parameters.
Since the plurality of regions in the metal artifact are rotationally symmetric, that is, the plurality of regions have the same shape, when the structural parameter of the first region corresponding to the first region is determined, the structural characteristic that the first region and the second region are rotationally symmetric can be utilized, that is, the first region is rotated by a certain angle to be the second region, and the structural parameter of the first region is adjusted based on the angle corresponding to the first region and the angle corresponding to the second region to obtain the structural parameter of the second region corresponding to the second region. Therefore, in the process of training the image processing model, the computer equipment can determine the second area structure parameter by adjusting the first area structure parameter and then utilizing the adjusted first area structure parameter, thereby improving the training efficiency.
According to the method provided by the embodiment of the application, based on the structural characteristic that the metal artifact comprises a plurality of rotationally symmetric regions, when the structure parameters of the metal artifact in the image processing model are adjusted, only the first region structure parameters corresponding to the first region are adjusted, and the second region structure parameters corresponding to the second region are then determined from the adjusted first region structure parameters. In this way, the structural characteristic of the metal artifact is used as prior knowledge when removing the metal artifact, and the fact that the metal artifact consists of a plurality of rotationally symmetric regions is fully considered, which can improve the artifact removal effect of the image processing model. Meanwhile, since only the first region structure parameters need to be adjusted based on the training sample and the second region structure parameters need not be separately adjusted, the training efficiency of the model can also be improved.
Fig. 3 is a flowchart of another image processing method according to an embodiment of the present application. The execution subject of the embodiment of the application is a computer device, and referring to fig. 3, the method comprises the following steps.
301. A computer device obtains an image processing model that includes structural parameters and location extraction parameters.
Wherein the image processing model is used for removing metal artifacts in the medical image, the image processing model being an untrained model or a model that has been trained one or more times. The metal artifact refers to noise information caused by metal in the process of generating the medical image, and the structure parameter represents the structure of the metal artifact. The structure of the metal artifact belongs to the characteristic of the metal artifact, and for different medical images, the structures of the metal artifact in different medical images are the same, so in the embodiment of the application, the structure parameters are set in the image processing model, and the structure parameters are obtained by training the image processing model.
Moreover, since the metal artifact is a rotationally symmetric stripe structure, the metal artifact can be divided into a plurality of rotationally symmetric regions, each region includes at least one stripe, and for the plurality of regions, the shapes of any two regions are similar, and the difference is the angle in the metal artifact. Therefore, for the image processing model, when the structure parameter is set in the image processing model, the structure parameter of the first region corresponding to the first region is set, and the structure parameter of the second region corresponding to the second region can be obtained by adjusting the structure parameter of the first region according to the angle corresponding to the first region and the angle corresponding to the second region. Wherein the first region structure parameter represents a structure of a first region in the metal artifact and the second region structure parameter represents a structure of a second region in the metal artifact.
In a possible implementation manner, the metal artifact is divided into a plurality of rotationally symmetric regions, and a reference bar in the metal artifact is determined, the reference bar being any bar in the metal artifact. Target bars are determined in the plurality of regions respectively, and the target bars in the regions correspond to one another; for example, if the target bar in the first region is the rightmost bar in the first region, then the target bar in the second region is also the rightmost bar in the second region. Then, the included angle between each target bar and the reference bar is determined as the angle corresponding to each region.
In another possible implementation manner, based on the total number of the divided regions in the metal artifact, the angle corresponding to each region is determined by using the following formula:
θ_l = 2π(l-1)/L

where θ_l is the angle corresponding to the lth region and L is the total number of divided regions in the metal artifact. For example, L is 8.
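As an illustrative sketch (not part of the patent), the region angles follow directly from this formula:

```python
import math

def region_angles(L):
    """Angle assigned to each of the L rotationally symmetric regions:
    theta_l = 2*pi*(l-1)/L for l = 1..L."""
    return [2 * math.pi * (l - 1) / L for l in range(1, L + 1)]

angles = region_angles(8)
print(angles[0])  # first region: 0.0
print(angles[2])  # third region: pi/2
```

With L = 8 the regions are spaced 45° apart, and the first region always corresponds to angle 0.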
Of course, the computer device can also determine the angle corresponding to each region in other manners, which is not limited in this embodiment of the application.
The position extraction parameters are used for extracting position information of metal artifacts in medical images. Since the positions of metal artifacts may differ between medical images, the position extraction parameters are set in the image processing model to extract the position information of the metal artifacts from the medical image.
In one possible implementation, the image processing model includes a location extraction network for extracting location information in the medical image and an artifact removal network for removing metal artifacts in the medical image based on the location information and the structural parameters.
In another possible implementation, the image processing model includes a plurality of image processing sub-models, each image processing sub-model including a location extraction network and an artifact removal network.
302. A computer device obtains training samples.
The training sample comprises a sample medical image and a corresponding sample target image, the sample medical image is an image containing metal artifacts, and the sample target image is an image of the sample medical image after the metal artifacts are removed.
In one possible implementation, the computer device directly acquires a sample target image, which is an image that does not include sample metal artifacts. The computer device obtains artifact information including location information of the metal and structure information of the metal. The computer device synthesizes a sample medical image including metal artifacts according to the sample target image, artifact information and imaging parameters of the CT device by adopting a data simulation method, and then determines the sample medical image and the sample target image as a training sample.
In another possible implementation, the computer device obtains a sample medical image, and then performs artifact removal on the sample medical image by using a method other than the image processing model in the present application to obtain a sample target image.
Optionally, for CT images, the computer device adjusts the pixel values of the images in the training sample, first constraining the pixel value of each pixel point to the range [0, 1], and then converting the pixel value of each pixel point to the range [0, 255].
Optionally, the computer device crops the images of the training sample to a target size, and then randomly performs horizontal mirror inversion or vertical mirror inversion on each image, thereby improving the diversity of the images in the training sample.
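The preprocessing steps above (value-range adjustment, cropping, random mirroring) can be sketched as follows; the function name, the 64×64 crop size, and the choice of centre-cropping are illustrative assumptions, not fixed by the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

def preprocess(img, target_size=64):
    """Clip intensities into [0, 1], rescale to [0, 255], centre-crop to
    target_size x target_size, then randomly mirror the crop horizontally
    or vertically (illustrative choices, not fixed by the patent)."""
    img = np.clip(img, 0.0, 1.0) * 255.0              # [0, 1] -> [0, 255]
    h, w = img.shape
    top, left = (h - target_size) // 2, (w - target_size) // 2
    img = img[top:top + target_size, left:left + target_size]
    if rng.random() < 0.5:
        img = img[:, ::-1]                             # horizontal mirror
    else:
        img = img[::-1, :]                             # vertical mirror
    return img

out = preprocess(rng.random((128, 128)))
print(out.shape)        # (64, 64)
```

In practice the mirroring would be applied identically to the sample medical image and its sample target image so that the pair stays aligned.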
303. When the computer equipment trains the image processing model, the first region structure parameters and the position extraction parameters are adjusted based on the training samples, and the first region structure parameters are adjusted based on the angle corresponding to the first region and the angle corresponding to the second region, so that the second region structure parameters corresponding to the second region are obtained.
The computer equipment calls the image processing model, processes the sample medical image to obtain a prediction target image, and adjusts the first region structure parameter and the position extraction parameter based on the prediction target image and the sample target image.
In the embodiment of the application, since the shapes of the first region and the second region are the same, only the first region structure parameter corresponding to the first region needs to be adjusted based on the training sample, and then the second region structure parameter corresponding to the second region is determined based on the adjusted first region structure parameter.
In one possible implementation, the structure parameter includes an adjustment coefficient and a plurality of original region structure parameters, and a product of the adjustment coefficient and one of the original region structure parameters represents a region structure parameter corresponding to a region, that is, for a first region structure parameter, the first region structure parameter is represented by a product of the adjustment coefficient and a first original region structure parameter corresponding to the first region. Then, when the image processing model is trained, the adjustment coefficient and the first original region structure parameter corresponding to the first region are adjusted based on the training sample, and then the adjusted first region structure parameter is determined based on the adjusted adjustment coefficient and the adjusted first original region structure parameter. For the second area structure parameter, the adjusted second area structure parameter is determined based on the adjusted adjustment coefficient, the angle corresponding to the first area, the angle corresponding to the second area, and the adjusted first original area structure parameter.
In one possible implementation manner, each region of the metal artifact includes at least one bar artifact, and the first original region structure parameter corresponding to the first region is a matrix used to represent the first region. After the matrix is obtained, the elements at positions not smaller than a reference value are adjusted to a target value; the target value indicates that the position of the element does not correspond to any sub-region in the first region, while the other positions in the adjusted matrix respectively correspond to a plurality of sub-regions in the first region. The element at each such position indicates whether the corresponding sub-region in the first region contains a bar artifact and, if so, the shape of the bar artifact. The reference value is determined based on the size of the convolution kernel representing the region structure parameters in the image processing model and a preset value: for example, if the size of the convolution kernel is p×p and the preset value is h, the reference value is ((p+1)/2)h, where h is any value greater than 0 and p is an odd number. The target value is a preset value, for example, 0 or another value.
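A minimal sketch of this masking rule, assuming p = 7 and h = 0.25 purely for illustration: positions of the p×p matrix whose coordinate radius reaches the reference value ((p+1)/2)h are set to the target value 0, since they correspond to no sub-region:

```python
import numpy as np

def radial_support(p=7, h=0.25, target=0.0):
    """Build the p x p coordinate grid x_ij = ((i-(p+1)/2)h, (j-(p+1)/2)h)
    (1-based i, j) and set every entry whose radius reaches the reference
    value ((p+1)/2)*h to the target value, marking positions that map to
    no sub-region. p = 7 and h = 0.25 are illustrative choices."""
    ref = ((p + 1) / 2) * h
    mask = np.ones((p, p))
    for i in range(1, p + 1):
        for j in range(1, p + 1):
            x = np.array([(i - (p + 1) / 2) * h, (j - (p + 1) / 2) * h])
            if np.linalg.norm(x) >= ref:
                mask[i - 1, j - 1] = target  # no corresponding sub-region
    return mask

m = radial_support()
print(int(m.sum()))   # 45: only the four corners fall outside the disk
```

For p = 7 the reference radius is 4h while the corner positions sit at radius 3√2·h ≈ 4.24h, so exactly the four corner elements are masked out.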
In one possible implementation, the computer device determines a rotation parameter corresponding to the second region, the rotation parameter representing the angular difference between the second region and the first region, denoted θ_l; when the angle corresponding to the first region is 0, this angular difference is the angle corresponding to the second region. The first region structure parameter is then adjusted based on the rotation parameter to obtain the second region structure parameter.
In a possible implementation manner, the first region structure parameter is a matrix, in which the elements at positions other than the position of the target value indicate whether the corresponding sub-region in the first region contains a bar artifact and, if so, the shape of the bar artifact. Based on the rotation parameter, the positions of the elements in the first region structure parameter are adjusted, so that in the resulting second region structure parameter the elements at positions other than the position of the target value indicate whether the corresponding sub-region in the second region contains a bar artifact and, if so, the shape of the bar artifact. That is, the positions of the elements in the first region structure parameter are adjusted so that the shape of the region indicated by the adjusted structure parameter is unchanged, while its angle becomes the angle corresponding to the second region.
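The element-position rotation can be made concrete in the special case of four regions (L = 4), where each θ_l is a multiple of 90° and `np.rot90` realises the rotation exactly; for general angles the rotation-parameter machinery described above (interpolation over the kernel coordinate grid) would be needed. This sketch is illustrative only:

```python
import numpy as np

def second_region_param(first_region_param, region_index, L=4):
    """Derive the parameter of region `region_index` by rotating the first
    region's parameter by theta_l = 2*pi*(region_index-1)/L. With L = 4 the
    rotation is a multiple of 90 degrees, so np.rot90 applies it exactly."""
    quarter_turns = region_index - 1   # theta_l / (pi/2) when L == 4
    return np.rot90(first_region_param, k=quarter_turns)

first = np.array([[1, 0, 0],
                  [1, 1, 0],
                  [0, 0, 0]])
print(second_region_param(first, region_index=2))
```

The rotated matrix keeps the same number and shape of bar-artifact entries; only their orientation within the kernel changes, which is exactly the invariance the shared structure parameters exploit.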
In one possible implementation, the smaller the error information between the prediction target image and the sample target image, the more accurate the image processing model is. The computer device determines error information between the predicted target image and the sample target image, and trains the image processing model according to the determined error information, so that the error information is smaller and smaller, and the image processing model is more and more accurate.
In one possible implementation, the computer device determines difference information between the sample medical image and the sample target image as sample artifact information, the image processing model further outputs prediction artifact information, the prediction artifact information is artifact information output by the image processing model, and the sample artifact information is real artifact information corresponding to the sample medical image, so that the smaller the error information between the prediction artifact information and the sample artifact information, the more accurate the image processing model is. Therefore, the computer device respectively determines the error information between the prediction target image and the sample target image and the error information between the prediction artifact information and the sample artifact information, and trains the image processing model according to the determined error information, so that the error information is smaller and smaller, and the image processing model is more and more accurate.
For example, the computer device determines the error information using an equation of the following form:

L = Σ_{n=1}^{N} μ_n ‖X − X^(n)‖₂² + λ₁ ‖I ⊙ (Y − X) − A^(N)‖₂² + λ₂ ‖A^(N)‖₁

where L represents the error information; μ_n, λ₁ and λ₂ are trade-off parameters used to weight and balance the error terms; X represents the sample target image, Y represents the sample medical image, and I represents the non-metal image corresponding to the sample medical image; X^(n) represents the nth prediction target image, A^(n) represents the nth prediction artifact information, N represents the total number of iterations in the image processing model, and n represents the nth iteration process; ‖·‖₂ represents the 2-norm operation and ‖·‖₁ represents the 1-norm operation.
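As an illustrative sketch (under the assumption that the loss combines μ_n-weighted 2-norm image errors, a λ₁-weighted 2-norm artifact error, and a λ₂-weighted 1-norm sparsity term — the patent's exact equation may differ), such error information could be computed as:

```python
import numpy as np

def training_loss(X, Y, I, X_preds, A_preds, mu, lam1, lam2):
    """Hedged sketch of a loss consistent with the symbol list above:
    weighted 2-norm errors between each iteration's predicted image and the
    sample target image, a 2-norm error between the final predicted artifact
    and the sample artifact information (Y - X on the non-metal region), and
    a 1-norm sparsity term on the final artifact prediction."""
    A_sample = I * (Y - X)                       # sample artifact information
    loss = sum(m * np.linalg.norm(X - Xp) ** 2 for m, Xp in zip(mu, X_preds))
    loss += lam1 * np.linalg.norm(A_sample - A_preds[-1]) ** 2
    loss += lam2 * np.abs(A_preds[-1]).sum()
    return loss

rng = np.random.default_rng(1)
X = rng.random((8, 8)); Y = X + 0.1; I = np.ones((8, 8))
val = training_loss(X, Y, I, [X], [I * (Y - X)], mu=[1.0], lam1=1.0, lam2=0.1)
print(val)
```

Here the prediction matches the target exactly, so only the 1-norm sparsity term contributes to the loss.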
304. And calling the trained image processing model by the computer equipment, and processing the medical image to obtain the target image from which the metal artifact is removed.
For the process of calling the image processing model to remove the metal artifacts in the medical image, refer to the following embodiment shown in fig. 7, which is not described herein again.
According to the method provided by the embodiment of the application, based on the structural characteristic that the metal artifact comprises a plurality of rotationally symmetric regions, when the structure parameters of the metal artifact in the image processing model are adjusted, only the first region structure parameters corresponding to the first region are adjusted, and the second region structure parameters corresponding to the second region are then determined from the adjusted first region structure parameters. In this way, the structural characteristic of the metal artifact is used as prior knowledge when removing the metal artifact, and the fact that the metal artifact consists of a plurality of rotationally symmetric regions is fully considered, which can improve the artifact removal effect of the image processing model. Meanwhile, since only the first region structure parameters need to be adjusted based on the training sample and the second region structure parameters need not be separately adjusted, the training efficiency of the model can also be improved.
For the image processing model in the above embodiment, in a possible implementation manner, the creation process of the image processing model is:
model principle:
medical images containing metal artifacts may be represented by the following equation one:
the formula I is as follows: i ═ Y ═ I-
Wherein,is a medical image, and is a medical image,to remove the target image after the metal artifact,the image is a non-metal image and is used for representing a non-metal area in the medical image, H and W are respectively the height and the width of the image, the pixel value in the non-metal image is 0 or 1, 0 represents a metal area, and 1 represents a non-metal area; a is artifact information indicating a metal artifact in the medical image, an indicates a point-by-point multiplication operation.
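A toy numpy illustration of this decomposition, assuming the reading I ⊙ Y = I ⊙ (X + A) with made-up values: on non-metal pixels the medical image equals the clean image plus the artifact, while metal pixels are masked out by I:

```python
import numpy as np

# Toy data: clean image X, one artifact streak A, one metal pixel in I.
X = np.full((4, 4), 0.5)               # clean target image
A = np.zeros((4, 4)); A[1, :] = 0.2    # a single artifact streak
I = np.ones((4, 4)); I[2, 2] = 0.0     # one metal pixel (masked out)
Y = X + A                              # observed medical image (toy)

lhs = I * Y                            # I ⊙ Y
rhs = I * (X + A)                      # I ⊙ (X + A)
print(np.allclose(lhs, rhs))   # True
print(lhs[1, 0])               # streak pixel: 0.7
```

Removing the artifact then amounts to estimating A (and hence X) on the non-metal region, since the metal region carries no usable intensity information.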
For example, the formula one above can be represented as the image shown in fig. 4, the medical image 401 being determined by the medical image 402 and the metal artifact 403.
The artifact information corresponding to the metal artifact can be represented as formula two:

A = Σ_{l=1}^{L} Σ_{k=1}^{K} C_k(θ_l) ⊗ M_{lk}

where C_k(θ_l) ∈ R^{p×p} represents the structure parameters of the metal artifact, the structure parameters being represented by convolution kernels with p×p being the size of the convolution kernels; M_{lk} represents the position information of the metal artifact; L represents the total number of the plurality of regions in the metal artifact; K represents the total number of convolution kernels corresponding to each region; k represents the kth convolution kernel corresponding to each region; θ_l represents the angle corresponding to the lth region in the metal artifact, θ_l = 2π(l-1)/L; and ⊗ represents a two-dimensional planar convolution operation.
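An illustrative numpy sketch of this synthesis, with a single kernel and a hand-written "same"-size convolution to avoid external dependencies: a horizontal streak kernel convolved with a one-pixel position map places the streak at the indicated location:

```python
import numpy as np

def conv2d_same(M, C):
    """Plain 'same'-size 2D convolution (kernel flipped, zero padding)."""
    p = C.shape[0]; pad = p // 2
    Mp = np.pad(M, pad)
    out = np.zeros_like(M, dtype=float)
    for i in range(M.shape[0]):
        for j in range(M.shape[1]):
            out[i, j] = np.sum(Mp[i:i + p, j:j + p] * C[::-1, ::-1])
    return out

def artifact_from_dictionary(kernels, maps):
    """Sketch of formula two, A = sum_l sum_k C_k(theta_l) ⊗ M_lk: each
    rotated kernel is convolved with its position map and accumulated."""
    return sum(conv2d_same(M, C) for C, M in zip(kernels, maps))

C0 = np.zeros((3, 3)); C0[1, :] = 1.0   # a horizontal 3-pixel streak kernel
M0 = np.zeros((8, 8)); M0[4, 4] = 1.0   # streak centred at (4, 4)
A = artifact_from_dictionary([C0], [M0])
print(A[4, 3:6])   # [1. 1. 1.]
```

The position maps M_lk thus act as sparse codes: each nonzero entry stamps one copy of the corresponding rotated kernel into the artifact image.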
For the structural parameters, referring to the convolution kernel C shown in fig. 4, it can be seen that, when a convolution kernel is used to represent the structure of a certain region in a metal artifact, the convolution kernel can be rotated to obtain a convolution kernel representing the structure of another region in the metal artifact, and the embodiment of the present application can represent the structural parameters as follows based on such a characteristic of the metal artifact:
where a_qtk and b_qtk indicate the tunable parameters to be trained, and R(θ_l), the rotation parameter corresponding to the lth region, is expressed as the two-dimensional rotation matrix:

R(θ_l) = [cos θ_l, −sin θ_l; sin θ_l, cos θ_l]
x_ij represents the coordinate of the element in the ith row and jth column of the kth convolution kernel corresponding to the lth region:

x_ij = [x_i, x_j]^T = [(i − (p+1)/2)h, (j − (p+1)/2)h]^T

where p denotes the size of the convolution kernel, h is a predetermined parameter (for example, h is 1/4 or another value), x_i corresponds to the ith row, and x_j corresponds to the jth column.
where Ω (x) represents a radial mask function, and Ω (x) ≧ 0, in the case of | | | x | ≧ ((p +1)/2) h, Ω (x) ═ 0, and ((p +1)/2) h is a reference value.In the case that q is less than or equal to p/2,if not, then,in the same way, the method has the advantages of,in the case where t is less than or equal to p/2,if not, then,
Substituting formula two into formula one, the following formula five can be obtained to represent the medical image:

I ⊙ Y = I ⊙ (X + Σ_{l=1}^{L} Σ_{k=1}^{K} C_k(θ_l) ⊗ M_{lk})

where C and M are respectively formed by stacking C_k(θ_l) and M_{lk}. Y is the medical image and I is the non-metal image, and both are known; the process of removing the metal artifact in the medical image is the process of determining the position information M and the structure parameters C in formula five, and after the position information M and the structure parameters C are obtained, the target image X is determined.
Since the structure parameters C are a characteristic of the metal artifact itself and are not related to any particular medical image, the structure parameters C can be assumed known, and only the position information M and the target image X need to be determined. Determining the position information M and the target image X can be implemented by optimizing the following formula six:

min_{M,X} ‖I ⊙ Y − I ⊙ (X + Σ_{l=1}^{L} Σ_{k=1}^{K} C_k(θ_l) ⊗ M_{lk})‖₂² + α f₁(M) + β f₂(X)

where α and β are trade-off parameters, and f₁(·) and f₂(·) are regularization functions. The regularization function f₁(·) represents a position feature, that is, a feature satisfied by the position information of metal artifacts, belonging to the prior knowledge corresponding to the position information of metal artifacts; the regularization function f₂(·) represents an image feature, that is, a feature satisfied by images that do not include metal artifacts, belonging to the prior knowledge corresponding to such images. The goal is to find the position information M and the target image X that minimize formula six.
(II) solving the model: in the embodiment of the application, formula six is optimized by alternately updating the position information M and the target image X with a proximal gradient technique. In the nth iteration, the position information and the target image can be determined by optimizing the following formulas:

M^(n) = prox_{η₁αf₁}(M^(n−1) − η₁ ∇_M g)
X^(n) = prox_{η₂βf₂}(X^(n−1) − η₂ ∇_X g)

where M^(n) represents the position information obtained in the nth iteration and X^(n) represents the target image obtained in the nth iteration; prox_{η₁αf₁}(·) and prox_{η₂βf₂}(·) respectively represent the proximal operators corresponding to f₁(·) and f₂(·); M^(n−1) represents the position information obtained in the (n−1)th iteration and X^(n−1) represents the target image obtained in the (n−1)th iteration; η₁ and η₂ are the update step sizes; and ∇_M g and ∇_X g respectively represent the position gradient information and the image gradient information.
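The alternating scheme can be sketched with a toy model in which the dictionary convolution is the identity and the proximal operator of f₁ is soft-thresholding; everything here (step sizes, threshold, choice of prox) is an illustrative assumption, not the patent's learned networks:

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau*||.||_1, standing in for the learned
    proximal network of the position regularizer in this toy sketch."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def alternating_updates(Y, M, X, C_apply, eta1=0.1, eta2=0.1, tau=0.01, iters=300):
    """Alternate a gradient step on the data-fit term with a proximal step,
    first in M, then in X (identity prox for X in this toy example)."""
    for _ in range(iters):
        resid = Y - X - C_apply(M)                 # data-fit residual
        M = soft_threshold(M + eta1 * resid, tau)  # M-step: gradient + prox
        resid = Y - X - C_apply(M)
        X = X + eta2 * resid                       # X-step: gradient only
    return M, X

Y = np.full((4, 4), 1.0); Y[0, 0] = 1.5           # toy observation
M, X = alternating_updates(Y, np.zeros((4, 4)), np.zeros((4, 4)),
                           C_apply=lambda M: M)
print(np.allclose(X + M, Y, atol=1e-2))   # the splitting reconstructs Y
```

The sparsity-inducing prox drives M toward zero wherever it is not needed, while X absorbs the remaining signal — the same division of labour the patent assigns to the position maps and the target image.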
(III) model creation: in order to determine the position information and the target image using the image processing model, the image processing model may be constructed by unfolding the iterative formulas above; since a plurality of iterations are required to determine the position information and the target image, the model includes a plurality of position extraction networks and a plurality of artifact removal networks.
The position extraction network M-net and the artifact removal network X-net in the nth iteration are respectively expressed as:

M^(n) = proxNet_{Θ_M^(n)}(M^(n−1) − η₁ ∇_M g)
X^(n) = proxNet_{Θ_X^(n)}(X^(n−1) − η₂ ∇_X g)

where proxNet_{Θ_M^(n)}(·) and proxNet_{Θ_X^(n)}(·) are both residual networks, respectively taking the place of the proximal operators in the iterative formulas above; M^(n) represents the position information obtained in the nth iteration, X^(n) represents the target image obtained in the nth iteration, M^(n−1) represents the position information obtained in the (n−1)th iteration, and X^(n−1) represents the target image obtained in the (n−1)th iteration; η₁ and η₂ are the update step sizes; and ∇_M g and ∇_X g respectively represent the position gradient information and the image gradient information. In the nth iteration, the network parameters of proxNet_{Θ_M^(n)}(·) and proxNet_{Θ_X^(n)}(·) are Θ_M^(n) and Θ_X^(n), respectively.
In one possible implementation, the proximal network is implemented by a residual network. As shown in fig. 5, for the position extraction network 501 and the artifact removal network 502: the M^(n−1) output by the previous position extraction network is input to the position extraction network 501; in the position extraction network 501, M^(n−1) and the position gradient information are merged and input into a residual network, and the residual network outputs M^(n). Similarly, the X^(n−1) output by the previous artifact removal network is input to the artifact removal network 502; in the artifact removal network 502, X^(n−1) and the image gradient information are merged and input into a residual network, and the residual network outputs X^(n). The residual network sequentially includes: a convolutional layer, a batch normalization layer, a ReLU (Rectified Linear Unit) layer, a convolutional layer, a batch normalization layer, and a skip connection. Optionally, the convolutional layers use a convolution kernel size of 3×3 with a stride of 1. It should be noted that the proximal network may also adopt other types of network structures, which is not limited in the embodiment of the application.
Based on the above-described creation process of the image processing model, it is possible to create the image processing model shown in fig. 6, in which each image processing sub-model includes a location extraction network and an artifact removal network.
It should be noted that the image processing model provided in the embodiment of the present application is created based on a metal artifact removal task in the image processing field, and a network structure in the image processing model is determined by a structural characteristic of a medical image including a metal artifact and a structural characteristic of the metal artifact, so that each operation in the image processing model has a physical meaning, and the structure of the entire image processing model is equivalent to a white box operation, and has a good model interpretability.
Fig. 7 is a flowchart of another image processing method provided in an embodiment of the present application, where in the embodiment of the present application, a computer device processes a medical image based on a target number of image processing submodels in an image processing model to obtain a target image with metal artifacts removed, where the image processing model includes a plurality of image processing submodels, and each image processing submodel includes a location extraction network and an artifact removal network, and the method includes the following steps.
701. The computer device invokes a location extraction network to determine a plurality of regional location information for metal artifacts in the medical image.
The computer equipment inputs the medical image into the position extraction network, and performs position extraction on the medical image based on the position extraction parameters to obtain a plurality of region position information, wherein each region position information represents the position of each region in metal artifacts contained in the medical image. It should be noted that, in this embodiment of the present application, the computer device may perform iterative processing on the medical image for multiple times based on multiple image processing submodels, where each image processing submodel may output position information, artifact information, and a target image corresponding to the medical image, and the position information and the target image output by the current image processing submodel may be used as inputs of a next image processing submodel.
In the case where the current location extraction network is the first location extraction network, the computer device acquires the stored plurality of reference area location information and the third reference image. Optionally, the reference area position information is position information preset by the computer device, for example, the position information is 0. Optionally, the third reference image is obtained by removing the artifact from the medical image, and an artifact removing method used for obtaining the third reference image is different from the method provided in the embodiment of the present application, that is, the third reference image and the target image in the embodiment of the present application are obtained by removing the artifact from the medical image in different manners, for example, the third reference image is an image obtained by removing the artifact from the medical image by using a linear interpolation algorithm, or the third reference image is an image obtained by removing the artifact from the medical image by using a gaussian filtering algorithm or another filtering algorithm.
And under the condition that the position extraction network is a position extraction network behind the first position extraction network, the computer equipment acquires the position information and the target image output by the last image processing sub-model, and determines the input of the position extraction network according to the position information and the target image output by the last image processing sub-model.
In the case where the location retrieval network is the first location retrieval network, the process of the computer device determining the location information includes: the computer device inputs the medical image, the third reference image and the reference position information to the position extraction network, and the position extraction network determines position gradient information by comparing the medical image and the third reference image, adjusts the reference position information according to the position gradient information, and outputs the position information, wherein the process is the same as the process of outputting the adjusted position information in the following step 705, and the detailed description is not given here.
702. The computer device calls an artifact removing network, and constructs first artifact information based on the plurality of region position information, the first region structure parameters and the second region structure parameters.
The computer equipment inputs the position information into the artifact removing network, the artifact removing network determines the area artifact information corresponding to each area according to the structure parameters and the position information, namely according to the area position information corresponding to each area and the corresponding area structure parameters, and then the area artifact information forms first artifact information. Optionally, the artifact removing network includes a convolution operation, and the structure parameter is a convolution kernel, so that the computer device performs convolution on the structure parameter and the position information in the artifact removing network to obtain the first artifact information.
In a possible implementation, the structure parameters are represented by convolution kernels, and the computer device performs convolution processing on each region structure parameter and the corresponding region position information to obtain the region artifact information.
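As a minimal sketch of the convolution described above (the function names, array shapes, and use of `scipy.signal.convolve2d` are illustrative assumptions, not taken from the embodiment), each region structure parameter acts as a small kernel convolved with that region's position map, and the per-region results are summed into the first artifact information:

```python
import numpy as np
from scipy.signal import convolve2d

def build_first_artifact(position_maps, region_kernels):
    # position_maps: one H x W region position map per rotationally
    # symmetric region of the metal artifact.
    # region_kernels: one k x k region structure parameter per region,
    # used as a convolution kernel.
    artifact = np.zeros_like(position_maps[0])
    for pos, kernel in zip(position_maps, region_kernels):
        # Region artifact information = region position info (*) region kernel.
        artifact += convolve2d(pos, kernel, mode="same")
    return artifact

pos_maps = [np.random.rand(64, 64) for _ in range(4)]
kernels = [np.full((5, 5), 0.1) for _ in range(4)]
first_artifact = build_first_artifact(pos_maps, kernels)
print(first_artifact.shape)  # (64, 64)
```

The `mode="same"` argument keeps each region's artifact contribution the same size as the image, so the contributions can be accumulated directly.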
703. And calling an artifact removing network by the computer equipment, and removing the artifact of the medical image according to the first artifact information to obtain the target image.
The first artifact information obtained by the computer device represents the metal artifact in the medical image, so the computer device removes the first artifact information from the medical image to obtain the target image corresponding to the medical image.
In one possible implementation, the medical image includes a metal region and a non-metal region: the metal region is the region of the medical image where metal is located, and the non-metal region is the region of the medical image that does not include metal. The computer device determines the non-metal region in the medical image as a non-metal image, determines the part of the first artifact information that belongs to the non-metal region, and removes that part from the non-metal image to obtain the target image.
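A small sketch of the masked removal above (the boolean-mask representation of the metal region is an assumption for illustration): the artifact is subtracted only where the mask marks non-metal pixels, leaving the metal region untouched.

```python
import numpy as np

def remove_artifact_in_non_metal(medical_image, first_artifact, metal_mask):
    # metal_mask: boolean H x W array, True where the metal region is.
    target = medical_image.copy()
    non_metal = ~metal_mask
    # Remove the first artifact information only in the non-metal region.
    target[non_metal] -= first_artifact[non_metal]
    return target

img = np.ones((4, 4))
art = np.full((4, 4), 0.25)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True  # metal in the center
out = remove_artifact_in_non_metal(img, art, mask)
print(out[0, 0], out[1, 1])  # 0.75 1.0
```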
In one possible implementation, the computer device determines the difference information between the medical image and the first artifact information as a second reference image, and weights the second reference image and the stored third reference image to obtain a fourth reference image. The third reference image and the target image are obtained by removing the artifacts of the medical image in different manners.
704. And calling a next position extraction network by the computer equipment, and respectively determining regional position gradient information corresponding to the plurality of regions by comparing the medical image with the target image.
The next position extraction network is the position extraction network in the image processing sub-model that follows the artifact removal network described above. The computer device takes the target image as the input of the next position extraction network and, based on that position extraction network, determines the region position gradient information corresponding to each of the plurality of regions by comparing the medical image with the target image.
In one possible implementation, the location extraction network includes a location extraction layer. The computer equipment calls a position extraction layer, determines difference information between the medical image and the target image as second artifact information, calls the position extraction layer, and respectively determines a plurality of regional position gradient information according to the first artifact information and the second artifact information.
Optionally, the computer device determines difference information between the first artifact information and the second artifact information as artifact difference information, and determines the position gradient information according to the artifact difference information, the first region structure parameter, and the second region structure parameter.
Optionally, the computer device determines a non-metal region in the medical image, the non-metal region referring to the region of the medical image that does not include metal. The computer device determines the part of the artifact difference information located in the non-metal region, and determines the plurality of pieces of region position gradient information according to the first region structure parameter, the second region structure parameter, and the artifact difference information located in the non-metal region.
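One way to sketch this gradient computation (an assumption for illustration, not the patent's exact formulation): if the forward step convolves position maps with region kernels, then correlating the masked artifact difference with each region kernel is the adjoint of that convolution, yielding one position-gradient map per region.

```python
import numpy as np
from scipy.signal import correlate2d

def region_position_gradients(artifact_diff, region_kernels, non_metal_mask):
    # Restrict the artifact difference information to the non-metal region.
    masked_diff = artifact_diff * non_metal_mask
    # Correlation with each region structure parameter (the adjoint of the
    # forward convolution) gives the region position gradient information.
    return [correlate2d(masked_diff, k, mode="same") for k in region_kernels]

diff = np.random.rand(32, 32)
kernels = [np.full((3, 3), 0.1) for _ in range(4)]
mask = np.ones((32, 32))
grads = region_position_gradients(diff, kernels, mask)
print(len(grads), grads[0].shape)  # 4 (32, 32)
```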
705. And the computer equipment calls the next position extraction network, and adjusts the position information of the plurality of areas according to the position gradient information of the plurality of areas respectively to obtain the adjusted position information of the plurality of areas.
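The adjustment in step 705 can be sketched as one gradient-descent-style update per region (the step size is an illustrative assumption; the patent does not specify the update rule's coefficients):

```python
def adjust_region_positions(position_maps, position_gradients, step=0.1):
    # Adjusted region position info = old position info - step * gradient.
    return [p - step * g for p, g in zip(position_maps, position_gradients)]

adjusted = adjust_region_positions([1.0, 2.0], [0.5, -1.0])
print(adjusted)  # [0.95, 2.1]
```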
706. And calling the next artifact removing network by the computer equipment, removing the artifacts of the medical image according to the adjusted position information of the plurality of areas to obtain an adjusted target image until the target images output by the target number of image processing submodels are obtained, and determining the obtained last target image as the image of the medical image after the metal artifacts are removed.
The image processing model includes a target number of image processing sub-models. After the artifact removal network outputs a target image, the computer device takes the position information and the target image output by the artifact removal network, together with the medical image, as the input of the next image processing sub-model, which outputs the next position information and the next target image. This continues until the target number of image processing sub-models have executed the metal artifact removal process and output the target number of target images, that is, until the target image output by the last image processing sub-model in the image processing model is obtained, at which point the whole image processing process is complete. The computer device determines the last target image obtained as the image of the medical image from which the metal artifact has been removed.
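The chained execution of the sub-models can be sketched as a simple driver loop (the callable pairs and stub behaviors below are illustrative assumptions; each real sub-model would be a trained network):

```python
def run_image_processing_model(medical_image, submodels):
    # submodels: one (position_extraction, artifact_removal) pair per
    # image processing sub-model, applied in sequence.
    positions, target = None, None
    for extract_positions, remove_artifact in submodels:
        # Each sub-model sees the medical image plus the previous
        # sub-model's target image and position information.
        positions = extract_positions(medical_image, target, positions)
        target = remove_artifact(medical_image, positions)
    # The last target image is the artifact-free result.
    return target

def make_stub(scale):
    def extract(img, target, positions):
        return positions  # stub: pass position info through unchanged
    def remove(img, positions):
        return img * scale  # stub: pretend to remove an artifact
    return extract, remove

result = run_image_processing_model(10.0, [make_stub(0.9), make_stub(0.8)])
print(result)  # 8.0
```

Note that, as in the embodiment, each iteration removes artifacts from the original medical image using the refined position information, rather than re-processing the previous target image.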
In one possible implementation, the artifact removal network includes an image reconstruction layer. The computer equipment calls an image reconstruction layer, and constructs adjusted artifact information according to the adjusted plurality of region position information, the first region structure parameters and the second region structure parameters; determining difference information between the medical image and the adjusted artifact information as a first reference image; and weighting the first reference image and the target image to obtain an adjusted target image.
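A minimal sketch of the image reconstruction layer described above (the equal weighting is an assumption; in a trained model the weight would be learned or fixed by design):

```python
import numpy as np

def reconstruct_target(medical_image, adjusted_artifact, target, weight=0.5):
    # First reference image = medical image - adjusted artifact information.
    first_reference = medical_image - adjusted_artifact
    # Adjusted target image = weighted sum of reference image and target.
    return weight * first_reference + (1.0 - weight) * target

img = np.full((2, 2), 4.0)
art = np.full((2, 2), 1.0)
tgt = np.full((2, 2), 2.0)
out = reconstruct_target(img, art, tgt)
print(out[0, 0])  # 2.5
```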
For example, referring to fig. 8, a medical image is input to an image processing model, which processes the medical image based on steps 701-706 described above, resulting in a target image.
It should be noted that, in the embodiment of the present application, the image processing model includes a plurality of image processing submodels as an example, and in another embodiment, when the image processing model includes one image processing submodel, that is, the image processing model includes one location extraction network and one artifact removal network, the target image output by the artifact removal network is taken as the target image from which the metal artifact is removed.
According to the method provided by this embodiment of the application, when the image processing model is trained, the structural characteristic that a metal artifact comprises a plurality of rotationally symmetric regions is used as prior knowledge for removing metal artifacts. Because this structural characteristic is fully considered, the effect of the image processing model in removing the metal artifact from the medical image is improved when the model is invoked.
In addition, this embodiment of the application performs multiple iterations: the result of each iteration is applied to the next iteration to continuously refine the determined target image, and the target image obtained in the last iteration is determined as the image of the medical image from which the metal artifact has been removed, which further ensures the artifact removal effect.
In addition, in the related art, removing a metal artifact with a deep learning network requires acquiring and processing the sinogram corresponding to the image containing the metal artifact. The scheme provided by this embodiment of the application is an image-domain processing method that does not need to collect the sinogram corresponding to the medical image, which reduces the data acquisition cost.
Fig. 9 is a flowchart of another image processing method provided in an embodiment of the present application, covering the training process and the testing process of the image processing model. In the training process, the computer device preprocesses the sample medical image, removes metal artifacts from the preprocessed sample medical image using the image processing model, iteratively trains the image processing model according to the removal result until the number of iterations reaches the target number, and stores the trained image processing model. In the testing process, the computer device preprocesses the medical image, loads the trained image processing model, removes the metal artifact from the preprocessed medical image based on the model, and outputs the target image from which the metal artifact has been removed.
Fig. 10 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application. Referring to fig. 10, the apparatus includes:
a model obtaining module 1001, configured to obtain an image processing model, where the image processing model includes structure parameters that indicate the structure of a metal artifact, the structure parameters including a first region structure parameter corresponding to a first region and a second region structure parameter corresponding to a second region, the first region being any one region of the metal artifact, and the second region being a region of the metal artifact other than the first region;
a model training module 1002, configured to adjust the first region structure parameter based on a training sample when training the image processing model, and adjust the first region structure parameter based on an angle corresponding to the first region and an angle corresponding to the second region to obtain a second region structure parameter corresponding to the second region;
wherein the trained image processing model is used for removing metal artifacts in any medical image based on the adjusted structural parameters.
According to the apparatus provided by this embodiment of the application, based on the structural characteristic that a metal artifact comprises a plurality of rotationally symmetric regions, when the structure parameters of the metal artifact in the image processing model are adjusted, the first region structure parameter corresponding to the first region is adjusted, and the adjusted first region structure parameter is then used to determine the second region structure parameter corresponding to the second region. The structural characteristic of the metal artifact thus serves as prior knowledge for removing metal artifacts, and because the characteristic that the metal artifact consists of a plurality of rotationally symmetric regions is fully considered, the artifact removal effect of the image processing model can be improved. Meanwhile, since only the first region structure parameter needs to be adjusted based on the training sample, without adjusting the second region structure parameter separately, the training efficiency of the model can also be improved.
In a possible implementation manner, the structure parameter includes an adjustment coefficient and a plurality of original region structure parameters, and a product of the adjustment coefficient and one of the original region structure parameters represents a region structure parameter corresponding to a region; the model training module 1002 is configured to adjust the adjustment coefficient and a first original region structure parameter corresponding to the first region based on the training sample when training the image processing model.
In another possible implementation manner, each of the regions of the metal artifacts includes at least one bar artifact, and the first original region structure parameter corresponding to the first region is a matrix, where the matrix is used to represent the first region;
the model training module 1002 is configured to:
adjusting elements in the matrix that are not less than a reference value to a target value, where the target value indicates that the position of such an element does not correspond to any sub-region in the first region; the positions of the adjusted matrix other than those holding the target value correspond respectively to a plurality of sub-regions in the first region, and the elements at those positions indicate whether the corresponding sub-regions of the first region contain a bar artifact and, if so, the shape of the bar artifact.
In another possible implementation, the model training module 1002 is configured to:
determining a rotation parameter corresponding to the second area, wherein the rotation parameter represents an angle difference between the corresponding second area and the first area;
and adjusting the first area structure parameter based on the rotation parameter to obtain the second area structure parameter.
In another possible implementation manner, the first region structure parameter is a matrix, elements in other positions of the matrix except for the position of the target value indicate whether a corresponding sub-region of the first region includes a bar artifact, and a shape of the bar artifact if the sub-region includes the bar artifact, and the model training module 1002 is configured to:
based on the rotation parameter, adjusting the positions of the elements in the first region structure parameter, so that the elements at the positions other than the position where the target value is located in the obtained second region structure parameter indicate whether the corresponding sub-region in the second region contains the bar artifact or not, and the shape of the bar artifact if the sub-region contains the bar artifact.
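The derivation of the second region structure parameter can be sketched as rotating the first region's kernel by the rotation parameter, so that only the first region's parameter needs to be learned. The use of `scipy.ndimage.rotate` with linear interpolation is an illustrative assumption; a real implementation might use an exact index permutation when the angle difference is a multiple of the region spacing.

```python
import numpy as np
from scipy.ndimage import rotate

def derive_second_region_kernel(first_kernel, rotation_angle_deg):
    # Rotate the first region structure parameter by the angle difference
    # (the rotation parameter) to obtain the second region's parameter.
    return rotate(first_kernel, rotation_angle_deg, reshape=False,
                  order=1, mode="nearest")

k1 = np.arange(9, dtype=float).reshape(3, 3)
k2 = derive_second_region_kernel(k1, 30.0)
print(k2.shape)  # (3, 3)
```

With `reshape=False`, the rotated kernel keeps the original shape, so it can be used in the same convolution as the first region's kernel.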
In another possible implementation manner, the model training module 1002 is further configured to adjust the position extraction parameter based on the training sample when training the image processing model;
wherein the trained image processing model is used for removing metal artifacts in any medical image based on the adjusted structural parameters and the adjusted position extraction parameters.
In another possible implementation, referring to fig. 11, the apparatus further includes:
an image processing module 1003, configured to invoke the trained image processing model, perform position extraction on the medical image based on the position extraction parameter, to obtain a plurality of area position information, where each area position information indicates a position of each area in the metal artifact included in the medical image; constructing first artifact information based on a plurality of region position information, the first region structure parameter and the second region structure parameter; and removing the artifact of the medical image based on the first artifact information to obtain the target image.
In another possible implementation, referring to fig. 11, the image processing module 1003 includes:
a position gradient determining unit, configured to determine, by comparing the medical image with the target image, region position gradient information corresponding to the plurality of regions, respectively, the region position gradient information indicating a variation width of the region position information;
a position information determining unit, configured to adjust the multiple pieces of region position information based on the multiple pieces of region position gradient information, respectively, to obtain multiple pieces of adjusted region position information;
and the artifact removing unit is used for determining the adjusted first artifact information based on the adjusted plurality of region position information, the first region structure parameter and the second region structure parameter, removing the artifacts of the medical image based on the adjusted first artifact information until a target number of target images are obtained, and determining the obtained last target image as the image of the medical image after the metal artifacts are removed.
In another possible implementation, the position gradient determination unit is configured to:
determining difference information between the medical image and the target image as second artifact information;
based on the first artifact information and the second artifact information, respectively determining region position gradient information corresponding to the plurality of regions.
In another possible implementation, the position gradient determination unit is configured to:
determining difference information between the first artifact information and the second artifact information as artifact difference information;
respectively determining region position gradient information corresponding to the plurality of regions based on the artifact difference information, the first region structure parameter and the second region structure parameter.
In another possible implementation, the image processing model includes a location extraction network and an artifact removal network; referring to fig. 11, the apparatus further includes:
an image processing module 1003, configured to invoke the location extraction network, perform location extraction on the medical image, and obtain location information of regions corresponding to multiple regions in the metal artifact; and calling the artifact removing network, determining the first artifact information based on a plurality of region position information, the first region structure parameter and the second region structure parameter, and removing the artifact of the medical image based on the first artifact information to obtain the target image.
All the above optional technical solutions may be combined arbitrarily to form optional embodiments of the present application, and are not described herein again.
It should be noted that: in the image processing apparatus provided in the above embodiment, when processing an image, only the division of the above functional modules is taken as an example, and in practical applications, the above functions may be distributed by different functional modules according to needs, that is, the internal structure of the computer device may be divided into different functional modules to complete all or part of the above described functions. In addition, the image processing apparatus and the image processing method provided in the above embodiments belong to the same concept, and specific implementation processes thereof are described in the method embodiments in detail and are not described herein again.
The embodiment of the present application further provides a computer device, where the computer device includes a processor and a memory, and the memory stores at least one computer program, and the at least one computer program is loaded and executed by the processor to implement the operations performed by the image processing method of the foregoing embodiment.
Optionally, the computer device is provided as a terminal. Fig. 12 is a schematic structural diagram of a terminal 1200 according to an embodiment of the present application. The terminal 1200 may be a portable mobile terminal such as: a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. Terminal 1200 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, desktop terminal, and so forth.
The terminal 1200 includes: a processor 1201 and a memory 1202.
The processor 1201 may include one or more processing cores, such as a 4-core processor, an 8-core processor, or the like. The processor 1201 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 1201 may also include a main processor and a coprocessor, where the main processor is a processor for processing data in an awake state, also called a Central Processing Unit (CPU), and the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 1201 may be integrated with a GPU (Graphics Processing Unit) for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 1201 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
In some embodiments, the terminal 1200 may further optionally include: a peripheral interface 1203 and at least one peripheral. The processor 1201, memory 1202, and peripheral interface 1203 may be connected by buses or signal lines. Various peripheral devices may be connected to peripheral interface 1203 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1204, display 1205, camera assembly 1206, audio circuitry 1207, and power supply 1208.
The peripheral interface 1203 may be used to connect at least one peripheral associated with I/O (Input/Output) to the processor 1201 and the memory 1202. In some embodiments, the processor 1201, memory 1202, and peripheral interface 1203 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1201, the memory 1202 and the peripheral device interface 1203 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The Radio Frequency circuit 1204 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 1204 communicates with a communication network and other communication devices by electromagnetic signals. The radio frequency circuit 1204 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1204 comprises: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1204 may communicate with other terminals through at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the world wide web, metropolitan area networks, intranets, various generations of mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the rf circuit 1204 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 1205 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1205 is a touch display screen, the display screen 1205 also has the ability to acquire touch signals on or over the surface of the display screen 1205. The touch signal may be input to the processor 1201 as a control signal for processing. At this point, the display 1205 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display 1205 may be one, disposed on a front panel of the terminal 1200; in other embodiments, the display 1205 can be at least two, respectively disposed on different surfaces of the terminal 1200 or in a folded design; in other embodiments, the display 1205 may be a flexible display disposed on a curved surface or on a folded surface of the terminal 1200. Even further, the display screen 1205 may be arranged in a non-rectangular irregular figure, i.e., a shaped screen. The Display panel 1205 can be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or other materials.
The audio circuitry 1207 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals into the processor 1201 for processing or inputting the electric signals into the radio frequency circuit 1204 to achieve voice communication. For stereo capture or noise reduction purposes, multiple microphones may be provided at different locations of terminal 1200. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 1201 or the radio frequency circuit 1204 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, the audio circuitry 1207 may also include a headphone jack.
A power supply 1208 is used to supply power to various components in the terminal 1200. The power supply 1208 may be an alternating current, direct current, disposable battery, or rechargeable battery. When power supply 1208 includes a rechargeable battery, the rechargeable battery can be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
Those skilled in the art will appreciate that the configuration shown in fig. 12 is not intended to be limiting of terminal 1200 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.
Optionally, the computer device is provided as a server. Fig. 13 is a schematic structural diagram of a server 1300 according to an embodiment of the present application. The server 1300 may vary considerably in configuration or performance, and may include one or more processors (CPUs) 1301 and one or more memories 1302, where the memory 1302 stores at least one computer program that is loaded and executed by the processor 1301 to implement the methods provided by the above method embodiments. Of course, the server may also have components such as a wired or wireless network interface, a keyboard, and an input/output interface for performing input and output, and the server may further include other components for implementing device functions, which are not described here.
The embodiment of the present application further provides a computer-readable storage medium, where at least one computer program is stored in the computer-readable storage medium, and the at least one computer program is loaded and executed by a processor to implement the operations performed by the image processing method of the foregoing embodiment.
The embodiment of the present application further provides a computer program product, which includes a computer program, and when the computer program is executed by a processor, the computer program implements the operations performed by the image processing method of the foregoing embodiment.
In some embodiments, the computer program according to the embodiments of the present application may be deployed and executed on one computer device, on multiple computer devices located at one site, or on multiple computer devices distributed across multiple sites and interconnected by a communication network; the multiple computer devices distributed across multiple sites and interconnected by a communication network may constitute a blockchain system.
It is understood that specific implementations of the present application involve related data such as user information. When the above embodiments of the present application are applied to specific products or technologies, user permission or consent needs to be obtained, and the collection, use and processing of related data need to comply with the relevant laws, regulations and standards of the relevant countries and regions. For example, the medical images referred to in this application are acquired with sufficient authorization.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, and the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only an alternative embodiment of the present application and is not intended to limit the present application, and any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.
Claims (15)
1. An image processing method, characterized in that the method comprises:
acquiring an image processing model, wherein the image processing model comprises structure parameters, the structure parameters represent the structure of the metal artifact, the structure parameters comprise first region structure parameters corresponding to a first region and second region structure parameters corresponding to a second region, the first region is any region in the metal artifact, and the second region is a region except the first region in the metal artifact;
when the image processing model is trained, adjusting the first region structure parameter based on a training sample, and adjusting the first region structure parameter based on an angle corresponding to the first region and an angle corresponding to the second region to obtain a second region structure parameter corresponding to the second region;
wherein the trained image processing model is used for removing metal artifacts in any medical image based on the adjusted structural parameters.
2. The method of claim 1, wherein the structure parameters include an adjustment coefficient and a plurality of original region structure parameters, and a product of the adjustment coefficient and one of the original region structure parameters represents a region structure parameter corresponding to a region; the adjusting the first region structure parameter based on a training sample when training the image processing model includes:
and when the image processing model is trained, adjusting the adjustment coefficient and a first original region structure parameter corresponding to the first region based on the training sample.
3. The method of claim 2, wherein each region of the metal artifact comprises at least one bar artifact, and the first original region structure parameter corresponding to the first region is a matrix used to represent the first region;
after the adjusting, based on the training sample when training the image processing model, the adjustment coefficient and the first original region structure parameter corresponding to the first region, the method further comprises:
adjusting elements of the matrix that are not smaller than a reference value to a target value, wherein the target value indicates that the position of the element does not correspond to any sub-region in the first region, positions of the adjusted matrix other than the positions holding the target value respectively correspond to sub-regions in the first region, and the elements at those positions indicate whether the corresponding sub-region contains a bar artifact and, if the sub-region contains a bar artifact, the shape of the bar artifact.
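The thresholding step recited in claim 3 can be illustrated with a minimal sketch; the names `reference_value` and `target_value` are hypothetical, and NumPy stands in for whatever framework an actual implementation would use:

```python
import numpy as np

def mask_structure_matrix(region_matrix, reference_value, target_value):
    """Set every element not smaller than the reference value to the
    target value (i.e. mark it as corresponding to no sub-region);
    the remaining elements keep describing whether each sub-region
    contains a bar artifact and, if so, its shape."""
    adjusted = region_matrix.copy()
    adjusted[adjusted >= reference_value] = target_value
    return adjusted

matrix = np.array([[0.2, 0.9],
                   [1.5, 0.1]])
masked = mask_structure_matrix(matrix, reference_value=0.8, target_value=-1.0)
```

Elements 0.9 and 1.5 exceed the reference value 0.8 and are replaced by the target value; the others are left untouched.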
4. The method of claim 1, wherein the adjusting the first region structure parameter based on the angle corresponding to the first region and the angle corresponding to the second region to obtain the second region structure parameter corresponding to the second region comprises:
determining a rotation parameter corresponding to the second region, wherein the rotation parameter represents an angle difference between the second region and the first region;
and adjusting the first region structure parameter based on the rotation parameter to obtain the second region structure parameter.
5. The method of claim 4, wherein the first region structure parameter is a matrix, elements at positions of the matrix other than the positions holding the target value indicate whether a corresponding sub-region in the first region contains a bar artifact and, if so, the shape of the bar artifact, and the adjusting the first region structure parameter based on the rotation parameter to obtain the second region structure parameter comprises:
adjusting, based on the rotation parameter, the positions of the elements in the first region structure parameter, so that elements at positions other than the positions holding the target value in the obtained second region structure parameter indicate whether a corresponding sub-region in the second region contains a bar artifact and, if so, the shape of the bar artifact.
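A toy sketch of the rotation in claims 4-5, under the simplifying assumption that the angle difference between the two regions is a multiple of 90 degrees so that `np.rot90` suffices; a real implementation would use an interpolating rotation (e.g. `scipy.ndimage.rotate`) for arbitrary angles:

```python
import numpy as np

def derive_second_region_matrix(first_region_matrix, angle_deg):
    """Rotate the element positions of the first region's structure
    matrix by the angle difference between the two regions, so the
    rotated matrix describes the second region.  Assumes angle_deg is
    a multiple of 90 degrees (an illustrative simplification)."""
    quarter_turns = (angle_deg // 90) % 4
    return np.rot90(first_region_matrix, k=quarter_turns)

first = np.array([[1, 2],
                  [3, 4]])
second = derive_second_region_matrix(first, angle_deg=90)
```

Only element positions move; the element values (bar-artifact presence and shape codes) are preserved, matching the claim's wording.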
6. The method of claim 1, wherein the image processing model further comprises position extraction parameters for extracting position information of metal artifacts from a medical image, and after the acquiring the image processing model, the method further comprises:
when training the image processing model, adjusting the position extraction parameters based on the training sample;
wherein the trained image processing model is used for removing metal artifacts in any medical image based on the adjusted structure parameters and the adjusted position extraction parameters.
7. The method of claim 6, further comprising:
calling the trained image processing model, and executing the following steps:
performing position extraction on the medical image based on the position extraction parameters to obtain a plurality of pieces of region position information, wherein each piece of region position information represents the position of one region of the metal artifact contained in the medical image;
constructing first artifact information based on the plurality of pieces of region position information, the first region structure parameter and the second region structure parameter;
and removing artifacts from the medical image based on the first artifact information to obtain a target image.
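The three steps of claim 7 can be sketched as a single pipeline. Everything here is illustrative: `extract_positions` stands in for the learned position-extraction parameters, and constructing the artifact estimate is simplified to placing each region's structure matrix at its extracted offset:

```python
import numpy as np

def remove_metal_artifact(medical_image, extract_positions, region_matrices):
    """Claim 7 sketch: extract one position per region, assemble the
    first artifact information by stamping each region's structure
    matrix at its position, then subtract it from the image to get
    the target image."""
    artifact = np.zeros_like(medical_image)
    for (row, col), region in zip(extract_positions(medical_image), region_matrices):
        h, w = region.shape
        artifact[row:row + h, col:col + w] += region
    target_image = medical_image - artifact
    return target_image, artifact
```

In a trained model the stamping step would be replaced by the network's learned artifact-construction computation; the subtraction at the end mirrors the "removing artifacts ... based on the first artifact information" clause.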
8. The method of claim 7, wherein after the removing artifacts from the medical image based on the first artifact information to obtain the target image, the method further comprises:
determining region position gradient information respectively corresponding to the plurality of regions by comparing the medical image with the target image, wherein the region position gradient information indicates a magnitude of change of the region position information;
adjusting the plurality of pieces of region position information respectively based on the plurality of pieces of region position gradient information to obtain a plurality of pieces of adjusted region position information;
and determining adjusted first artifact information based on the plurality of pieces of adjusted region position information, the first region structure parameter and the second region structure parameter, removing artifacts from the medical image based on the adjusted first artifact information until a target number of target images are obtained, and determining the last obtained target image as an image of the medical image from which the metal artifact has been removed.
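Claim 8 describes an iterative refinement loop: remove the artifact, compare the result with the input to obtain per-region position gradients, nudge the positions, and repeat a fixed number of times. A minimal sketch, where `build_artifact` and `position_gradients` are hypothetical helpers standing in for the claimed artifact construction and gradient computation:

```python
def refine_positions(medical_image, positions, region_matrices,
                     build_artifact, position_gradients,
                     step=1.0, target_count=3):
    """Claim 8 sketch: iterate artifact removal target_count times,
    updating each region's position along its gradient; return the
    last target image as the de-artifacted result."""
    target = None
    for _ in range(target_count):
        artifact = build_artifact(positions, region_matrices)
        target = medical_image - artifact
        grads = position_gradients(medical_image, target, artifact)
        positions = [p - step * g for p, g in zip(positions, grads)]
    return target
```

With well-estimated positions the gradients shrink to zero and the loop converges to a stable target image.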
9. The method of claim 8, wherein the determining region position gradient information respectively corresponding to the plurality of regions by comparing the medical image with the target image comprises:
determining difference information between the medical image and the target image as second artifact information;
and determining, based on the first artifact information and the second artifact information, the region position gradient information respectively corresponding to the plurality of regions.
10. The method of claim 9, wherein the determining, based on the first artifact information and the second artifact information, the region position gradient information respectively corresponding to the plurality of regions comprises:
determining difference information between the first artifact information and the second artifact information as artifact difference information;
and determining, based on the artifact difference information, the first region structure parameter and the second region structure parameter, the region position gradient information respectively corresponding to the plurality of regions.
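Claims 9-10 derive the gradients from the gap between the predicted artifact (first artifact information) and the observed one (the image-minus-target difference, second artifact information). A toy sketch under an assumed simplification: each region's position gradient is taken as the mean of the artifact difference over that region's footprint, with `region_footprints` a hypothetical list of boolean masks:

```python
import numpy as np

def region_position_gradients(first_artifact, second_artifact, region_footprints):
    """Claims 9-10 sketch: form the artifact difference information,
    then reduce it per region.  Reducing by the mean over a footprint
    mask is an illustrative stand-in for the claimed computation that
    also involves the region structure parameters."""
    artifact_difference = first_artifact - second_artifact
    return [float(artifact_difference[mask].mean()) for mask in region_footprints]
```

A zero gradient for a region means the predicted and observed artifacts agree there, so its position needs no further adjustment.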
11. The method of claim 1, wherein the image processing model comprises a position extraction network and an artifact removal network; and the method further comprises:
calling the position extraction network to perform position extraction on the medical image to obtain region position information respectively corresponding to a plurality of regions in the metal artifact;
and calling the artifact removal network to determine first artifact information based on the plurality of pieces of region position information, the first region structure parameter and the second region structure parameter, and to remove artifacts from the medical image based on the first artifact information to obtain a target image.
12. An image processing apparatus, characterized in that the apparatus comprises:
a model obtaining module, configured to acquire an image processing model, wherein the image processing model comprises structure parameters, the structure parameters represent a structure of a metal artifact and comprise a first region structure parameter corresponding to a first region and a second region structure parameter corresponding to a second region, the first region is any region in the metal artifact, and the second region is a region of the metal artifact other than the first region;
a model training module, configured to: when training the image processing model, adjust the first region structure parameter based on a training sample, and adjust the first region structure parameter based on an angle corresponding to the first region and an angle corresponding to the second region to obtain the second region structure parameter corresponding to the second region;
wherein the trained image processing model is used for removing metal artifacts in any medical image based on the adjusted structure parameters.
13. A computer device, characterized in that the computer device comprises a processor and a memory, in which at least one computer program is stored, which is loaded and executed by the processor to perform the operations performed by the image processing method according to any of claims 1 to 11.
14. A computer-readable storage medium, having stored therein at least one computer program, which is loaded and executed by a processor, to perform operations performed by the image processing method of any one of claims 1 to 11.
15. A computer program product comprising a computer program, characterized in that the computer program, when being executed by a processor, is adapted to carry out the operations of the image processing method of any of the claims 1 to 11.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210409315.6A CN115115724A (en) | 2022-04-19 | 2022-04-19 | Image processing method, image processing device, computer equipment and storage medium |
PCT/CN2023/081924 WO2023202285A1 (en) | 2022-04-19 | 2023-03-16 | Image processing method and apparatus, computer device, and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115115724A true CN115115724A (en) | 2022-09-27 |
Family
ID=83325461
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210409315.6A Pending CN115115724A (en) | 2022-04-19 | 2022-04-19 | Image processing method, image processing device, computer equipment and storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN115115724A (en) |
WO (1) | WO2023202285A1 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116228916A (en) * | 2023-05-10 | 2023-06-06 | 中日友好医院(中日友好临床医学研究所) | Image metal artifact removal method, system and equipment |
WO2023202285A1 (en) * | 2022-04-19 | 2023-10-26 | 腾讯科技(深圳)有限公司 | Image processing method and apparatus, computer device, and storage medium |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106886982A (en) * | 2017-02-20 | 2017-06-23 | 江苏美伦影像系统有限公司 | CBCT image annular artifact minimizing technologies |
KR102414094B1 (en) * | 2020-08-18 | 2022-06-27 | 연세대학교 산학협력단 | Method and Device for Correcting Metal Artifacts in CT Images |
CN113256529B (en) * | 2021-06-09 | 2021-10-15 | 腾讯科技(深圳)有限公司 | Image processing method, image processing device, computer equipment and storage medium |
CN115115724A (en) * | 2022-04-19 | 2022-09-27 | 腾讯医疗健康(深圳)有限公司 | Image processing method, image processing device, computer equipment and storage medium |
- 2022-04-19: CN application CN202210409315.6A filed (CN115115724A), status active, Pending
- 2023-03-16: PCT application PCT/CN2023/081924 filed (WO2023202285A1), status unknown
Also Published As
Publication number | Publication date |
---|---|
WO2023202285A1 (en) | 2023-10-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111091166B (en) | Image processing model training method, image processing device, and storage medium | |
CN112308200B (en) | Searching method and device for neural network | |
US20230081645A1 (en) | Detecting forged facial images using frequency domain information and local correlation | |
CN109978936B (en) | Disparity map acquisition method and device, storage medium and equipment | |
US20210343041A1 (en) | Method and apparatus for obtaining position of target, computer device, and storage medium | |
CN113256529B (en) | Image processing method, image processing device, computer equipment and storage medium | |
WO2022134971A1 (en) | Noise reduction model training method and related apparatus | |
CN111091576A (en) | Image segmentation method, device, equipment and storage medium | |
CN115115724A (en) | Image processing method, image processing device, computer equipment and storage medium | |
CN112381707B (en) | Image generation method, device, equipment and storage medium | |
CN114332530A (en) | Image classification method and device, computer equipment and storage medium | |
CN114283050A (en) | Image processing method, device, equipment and storage medium | |
CN112990053B (en) | Image processing method, device, equipment and storage medium | |
CN114283299A (en) | Image clustering method and device, computer equipment and storage medium | |
CN115131199A (en) | Training method of image generation model, image generation method, device and equipment | |
CN114677350B (en) | Connection point extraction method, device, computer equipment and storage medium | |
CN113570645A (en) | Image registration method, image registration device, computer equipment and medium | |
CN115131194A (en) | Method for determining image synthesis model and related device | |
CN115170896A (en) | Image processing method and device, electronic equipment and readable storage medium | |
CN113570510A (en) | Image processing method, device, equipment and storage medium | |
CN115689947B (en) | Image sharpening method, system, electronic device and storage medium | |
CN116704200A (en) | Image feature extraction and image noise reduction method and related device | |
CN114419517B (en) | Video frame processing method, device, computer equipment and storage medium | |
CN116048763A (en) | Task processing method and device based on BEV multitasking model framework | |
CN113743186B (en) | Medical image processing method, device, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||