CN112365413A - Image processing method, device, equipment, system and computer readable storage medium - Google Patents


Info

Publication number: CN112365413A
Application number: CN202011193184.XA
Authority: CN (China)
Legal status: Pending (the legal status is an assumption and not a legal conclusion; Google has not performed a legal analysis)
Inventor: 李炳轩
Original and current assignee: Hubei Ruishi Digital Medical Imaging Technology Co., Ltd.
Priority application: CN202011193184.XA
Related PCT application: PCT/CN2021/118707 (WO2022089079A1)

Classifications

    • G06T 5/70
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
        • G06T 2207/10 Image acquisition modality
            • G06T 2207/10072 Tomographic images
                • G06T 2207/10081 Computed x-ray tomography [CT]
                • G06T 2207/10104 Positron emission tomography [PET]

Abstract

The embodiments of the present application disclose an image processing method, apparatus, device, system and computer-readable storage medium. The method comprises the following steps: dividing an acquired original image into a plurality of regions according to a preset rule; calculating the total rate of change of the original gray values of each region; and comparing the total rate of change with a preset value, calling a constructed first noise reduction model to perform first-order total variation processing on the original gray values when the total rate of change is greater than or equal to the preset value, and calling a constructed second noise reduction model to perform second-order total variation processing on the original gray values when the total rate of change is less than the preset value. With the technical solution provided by the embodiments of the present application, noise can be removed from the image while its spatial resolution is improved.

Description

Image processing method, device, equipment, system and computer readable storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method, an image processing apparatus, an image processing device, an image processing system, and a computer-readable storage medium.
Background
The quality of medical images such as Positron Emission Tomography (PET) and Computed Tomography (CT) images has a decisive influence on the accuracy of clinical diagnosis. However, during medical imaging, noise may be introduced into the image by factors such as background radiation, data quality and reconstruction algorithms. The introduced noise causes image distortion and loss of quantitative accuracy on the one hand, and on the other hand can obscure detailed information and reduce the spatial resolution of the image.
To remove noise from medical images such as PET or CT images, filtering methods such as mean filtering, median filtering and Gaussian filtering are generally used. Although these methods can eliminate high-frequency noise in most medical images, they also degrade spatial resolution, so they are not ideal noise reduction methods.
Therefore, there is a need for a new image processing method to remove noise in medical images or other types of images and to improve the spatial resolution of the images.
Disclosure of Invention
An object of the embodiments of the present application is to provide an image processing method, an image processing apparatus, an image processing device, an image processing system, and a computer-readable storage medium, so as to solve at least one problem in the prior art.
To solve the above technical problem, an embodiment of the present application provides an image processing method, which may include:
dividing the acquired original image into a plurality of areas according to a preset rule;
calculating the total change rate of the original gray value of each region;
and judging whether the total rate of change is greater than a preset value: when the total rate of change of the original gray value is judged to be greater than or equal to the preset value, calling the constructed first noise reduction model to perform first-order total variation processing on the original gray value; when the total rate of change of the original gray value is judged to be less than the preset value, calling the constructed second noise reduction model to perform second-order total variation processing on the original gray value.
Optionally, the step of dividing the original image into a plurality of regions according to a preset rule includes:
dividing the original image into a plurality of regions according to the constituent elements, physical attributes or orientations of the original image.
Optionally, the step of calculating the total rate of change of the raw gray-scale values of each of the regions comprises:
for each region, calculating the difference value between the original gray values of every two adjacent pixel points in the region in sequence;
determining the change rate of the original gray values of two adjacent pixel points according to the difference value of the original gray values and the coordinate difference value of every two adjacent pixel points;
and determining the total change rate of the original gray values of the region according to the obtained change rate of the original gray values of all the pixel points in the region.
Optionally, the step of determining the total rate of change of the raw grayscale values for the region comprises:
determining the average value of the change rates of the original gray values of all the pixel points in the region as the total change rate of the original gray values of the region; or
And performing mean square error operation on the change rate of the original gray values of all the pixel points in the region to obtain the total change rate of the original gray values of the region.
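The two aggregation options above (averaging, or a mean-square-style aggregate) can be sketched as follows. This is an illustrative NumPy snippet, not the patent's reference code; the function name and the coordinate step of 1 between adjacent pixels are our assumptions.

```python
import numpy as np

def total_change_rate(region, method="mean"):
    """Total rate of change of the original gray values in a region.

    The per-pixel rate is the difference between adjacent gray values
    divided by the coordinate difference (1 for neighbouring pixels).
    'mean' averages the absolute rates; 'rms' takes their
    root-mean-square, mirroring the mean-square-error option.
    """
    region = np.asarray(region, dtype=float)
    # adjacent differences along rows and columns; coordinate step is 1
    rates = np.concatenate([np.abs(np.diff(region, axis=0)).ravel(),
                            np.abs(np.diff(region, axis=1)).ravel()])
    if method == "mean":
        return rates.mean()
    return np.sqrt((rates ** 2).mean())  # mean-square aggregate
```

A flat region yields a total rate of change of zero, so it would be routed to the second-order model by the comparison step.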
Optionally, the step of calculating the total rate of change of the raw gray-scale values of each of the regions further comprises:
counting the change rule of the original gray value of all the pixel points in each region, and determining the slope of the linear change of the original gray value as the total change rate of the original gray value of the region when the original gray value is linearly changed.
Optionally, when the original image is a two-dimensional image, the step of calling the constructed first noise reduction model to perform first-order total variation processing on the original gray value includes:
sequentially carrying out first-order gradient processing on the original gray values of all pixel points in all the regions which meet the condition that the total change rate of the original gray values is greater than or equal to the preset value according to the following formula:
$\nabla f_{x,y} = (f_{x,y} - f_{x-1,y}) + (f_{x,y} - f_{x,y-1})$,
judging whether the total gray value of each pixel point obtained after processing is larger than a first preset threshold value or not aiming at each region;
for each pixel point, when the total gray value obtained after the processing is judged to be greater than or equal to the first preset threshold, determining the total gray value as the final gray value of the pixel point, or calculating the final gray value of the pixel point according to the following formula:
$f'_{x,y} = f_{x,y} + k \times [(f_{x,y} - f_{x-1,y}) + (f_{x,y} - f_{x,y-1})]$,
when the total gray value is judged to be smaller than the first preset threshold, calculating the total gray value according to the following formula to obtain the final gray value of the pixel point:
$f'_{x,y} = f_{x,y} - (f_{x,y} - f_{x-1,y}) - (f_{x,y} - f_{x,y-1})$,
where $f_{x,y}$, $f_{x-1,y}$ and $f_{x,y-1}$ respectively represent the original gray values of the pixel points at coordinates (x, y), (x-1, y) and (x, y-1); $\nabla f_{x,y}$ represents the total gray value of the pixel point at coordinates (x, y) obtained after the processing; $f'_{x,y}$ represents the final gray value of the pixel point at coordinates (x, y); x and y are both natural numbers; and k is an image enhancement coefficient.
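The two-dimensional first-order branch above can be sketched in NumPy as below. This is an illustration under stated assumptions: border pixels without a left or upper neighbour are left unchanged (the text does not specify border handling), and the enhancement formula (adding $k$ times the gradient) is used for pixels at or above the threshold rather than the "keep the total gray value" alternative.

```python
import numpy as np

def first_order_tv_2d(f, k=1.0, threshold=10.0):
    """First-order total-variation step for a 2-D region.

    grad = (f[x,y]-f[x-1,y]) + (f[x,y]-f[x,y-1]) per pixel; if grad is
    at or above the threshold the gray value is enhanced by k*grad,
    otherwise grad is subtracted (the smoothing branch).
    """
    f = np.asarray(f, dtype=float)
    out = f.copy()
    grad = np.zeros_like(f)
    # backward differences along x and y for pixels with both neighbours
    grad[1:, 1:] = (f[1:, 1:] - f[:-1, 1:]) + (f[1:, 1:] - f[1:, :-1])
    interior = np.zeros(f.shape, dtype=bool)
    interior[1:, 1:] = True
    enhance = interior & (grad >= threshold)   # enhancement branch
    smooth = interior & (grad < threshold)     # smoothing branch
    out[enhance] = f[enhance] + k * grad[enhance]
    out[smooth] = f[smooth] - grad[smooth]
    return out
```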
Optionally, when the original image is a three-dimensional image, the step of calling the constructed first noise reduction model to perform first-order total variation processing on the original gray value includes:
sequentially carrying out first-order gradient processing on the original gray values of all pixel points in all the regions which meet the condition that the total change rate of the original gray values is greater than or equal to the preset value according to the following formula:
$\nabla f_{x,y,z} = (f_{x,y,z} - f_{x-1,y,z}) + (f_{x,y,z} - f_{x,y-1,z}) + (f_{x,y,z} - f_{x,y,z-1})$,
judging whether the total gray value of each pixel point obtained after processing is larger than a first preset threshold value or not aiming at each region;
for each pixel point, when the total gray value obtained after the processing is judged to be greater than or equal to the first preset threshold, determining the total gray value as the final gray value of the pixel point, or calculating the final gray value of the pixel point according to the following formula:
$f'_{x,y,z} = f_{x,y,z} + k \times [(f_{x,y,z} - f_{x-1,y,z}) + (f_{x,y,z} - f_{x,y-1,z}) + (f_{x,y,z} - f_{x,y,z-1})]$;
when the total gray value is judged to be smaller than the first preset threshold, calculating the total gray value according to the following formula to obtain the final gray value of the pixel point:
$f'_{x,y,z} = f_{x,y,z} - (f_{x,y,z} - f_{x-1,y,z}) - (f_{x,y,z} - f_{x,y-1,z}) - (f_{x,y,z} - f_{x,y,z-1})$,
where $f_{x,y,z}$, $f_{x-1,y,z}$, $f_{x,y-1,z}$ and $f_{x,y,z-1}$ respectively represent the original gray values of the pixel points at coordinates (x, y, z), (x-1, y, z), (x, y-1, z) and (x, y, z-1); $\nabla f_{x,y,z}$ represents the total gray value of the pixel point at coordinates (x, y, z) obtained after the processing; $f'_{x,y,z}$ represents the final gray value of the pixel point at coordinates (x, y, z); x, y and z are all natural numbers; and k is an image enhancement coefficient.
Optionally, when the original image is a two-dimensional image, the step of calling the second noise reduction model to perform second-order total variation processing on the original gray value includes:
and sequentially carrying out second-order gradient processing on the original gray values of all the pixel points in all the areas which meet the condition that the total change rate of the original gray values is smaller than the preset value according to the following formula:
$\nabla_x^2 f_{x,y} = f_{x+1,y} - 2f_{x,y} + f_{x-1,y}$,
$\nabla_y^2 f_{x,y} = f_{x,y+1} - 2f_{x,y} + f_{x,y-1}$,
$\nabla^2 f_{x,y} = \nabla_x^2 f_{x,y} + \nabla_y^2 f_{x,y}$,
judging whether the total gray value of each pixel point obtained after processing is larger than a second preset threshold value or not aiming at each region;
for each pixel point, when the total gray value obtained after the processing is judged to be greater than or equal to the second preset threshold, determining the total gray value as the final gray value of the pixel point, or calculating the final gray value of the pixel point according to the following formula:
$f'_{x,y} = f_{x,y} + k \times \nabla^2 f_{x,y}$,
when the total gray value is judged to be smaller than the second preset threshold, calculating the total gray value according to the following formula to obtain the final gray value of the pixel point:
$f'_{x,y} = f_{x,y} - \nabla^2 f_{x,y}$,
where $f_{x,y}$, $f_{x-1,y}$, $f_{x+1,y}$, $f_{x,y-1}$ and $f_{x,y+1}$ respectively represent the original gray values of the pixel points at coordinates (x, y), (x-1, y), (x+1, y), (x, y-1) and (x, y+1); $\nabla_x^2 f_{x,y}$ and $\nabla_y^2 f_{x,y}$ respectively represent the gray values of the pixel point at coordinates (x, y) in the x-axis and y-axis directions obtained after the processing; $\nabla^2 f_{x,y}$ represents the total gray value of the pixel point at coordinates (x, y) obtained after the processing; $f'_{x,y}$ represents the final gray value of the pixel point at coordinates (x, y); x and y are both natural numbers; and k is an image enhancement coefficient.
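The second-order formulas appear only as image placeholders in this text, so the sketch below assumes the central second differences (a discrete Laplacian) implied by the listed neighbours at x±1 and y±1. The enhancement/smoothing split mirrors the first-order branch, and leaving border pixels unchanged is our assumption; treat this as a reconstruction, not the patent's reference code.

```python
import numpy as np

def second_order_tv_2d(f, k=1.0, threshold=1.0):
    """Second-order total-variation step for a 2-D region (assumed
    central-difference form).

    lap = second differences along x and y summed (discrete Laplacian);
    pixels whose lap meets the threshold are enhanced by k*lap, the
    rest are smoothed by subtracting lap.
    """
    f = np.asarray(f, dtype=float)
    out = f.copy()
    lap = np.zeros_like(f)
    lap[1:-1, 1:-1] = (f[2:, 1:-1] - 2 * f[1:-1, 1:-1] + f[:-2, 1:-1]
                       + f[1:-1, 2:] - 2 * f[1:-1, 1:-1] + f[1:-1, :-2])
    interior = np.zeros(f.shape, dtype=bool)
    interior[1:-1, 1:-1] = True
    enhance = interior & (lap >= threshold)
    smooth = interior & (lap < threshold)
    out[enhance] += k * lap[enhance]
    out[smooth] -= lap[smooth]
    return out
```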
Optionally, when the original image is a three-dimensional image, the step of calling the second noise reduction model to perform second-order total variation processing on the original gray value includes:
and sequentially carrying out second-order gradient processing on the original gray values of all the pixel points in all the areas which meet the condition that the total change rate of the original gray values is smaller than the preset value according to the following formula:
$\nabla_x^2 f_{x,y,z} = f_{x+1,y,z} - 2f_{x,y,z} + f_{x-1,y,z}$,
$\nabla_y^2 f_{x,y,z} = f_{x,y+1,z} - 2f_{x,y,z} + f_{x,y-1,z}$,
$\nabla_z^2 f_{x,y,z} = f_{x,y,z+1} - 2f_{x,y,z} + f_{x,y,z-1}$,
$\nabla^2 f_{x,y,z} = \nabla_x^2 f_{x,y,z} + \nabla_y^2 f_{x,y,z} + \nabla_z^2 f_{x,y,z}$,
judging whether the total gray value of each pixel point obtained after processing is larger than a second preset threshold value or not aiming at each region;
for each pixel point, when the total gray value obtained after the processing is judged to be greater than or equal to the second preset threshold, determining the total gray value as the final gray value of the pixel point, or calculating the final gray value of the pixel point according to the following formula:
$f'_{x,y,z} = f_{x,y,z} + k \times \nabla^2 f_{x,y,z}$,
when the total gray value is judged to be smaller than the second preset threshold, calculating the total gray value according to the following formula to obtain the final gray value of the pixel point:
$f'_{x,y,z} = f_{x,y,z} - \nabla^2 f_{x,y,z}$,
where $f_{x,y,z}$, $f_{x-1,y,z}$, $f_{x+1,y,z}$, $f_{x,y-1,z}$, $f_{x,y+1,z}$, $f_{x,y,z-1}$ and $f_{x,y,z+1}$ respectively represent the original gray values of the pixel points at coordinates (x, y, z), (x-1, y, z), (x+1, y, z), (x, y-1, z), (x, y+1, z), (x, y, z-1) and (x, y, z+1); $\nabla_x^2 f_{x,y,z}$, $\nabla_y^2 f_{x,y,z}$ and $\nabla_z^2 f_{x,y,z}$ respectively represent the gray values of the pixel point at coordinates (x, y, z) in the x-axis, y-axis and z-axis directions obtained after the processing; $\nabla^2 f_{x,y,z}$ represents the total gray value of the pixel point at coordinates (x, y, z) obtained after the processing; $f'_{x,y,z}$ represents the final gray value of the pixel point at coordinates (x, y, z); x, y and z are all natural numbers; and k is an image enhancement coefficient.
Optionally, the raw image comprises a CT image, an MRI image, a PET-CT image or an ultrasound image.
An embodiment of the present application further provides an image processing apparatus, which may include:
a dividing unit configured to divide the acquired original image into a plurality of regions according to a preset rule;
a calculation unit configured to calculate a total rate of change of the original gradation value of each of the regions;
and a processing unit configured to judge whether the total rate of change is greater than a preset value: when the total rate of change of the original gray value is judged to be greater than or equal to the preset value, the constructed first noise reduction model is called to perform first-order total variation processing on the original gray value; when the total rate of change of the original gray value is judged to be less than the preset value, the constructed second noise reduction model is called to perform second-order total variation processing on the original gray value.
Embodiments of the present application also provide a computer-readable storage medium, on which a computer program is stored, which, when executed, can implement the above-mentioned image processing method.
The embodiment of the application also provides a computer device, which may comprise a processor and a memory storing a computer program; when the computer program is executed, the processor performs the above image processing method.
The embodiment of the application also provides an image processing system, which can comprise the computer equipment and the scanning equipment connected with the computer equipment.
Optionally, the scanning device comprises a CT scanner, an MRI scanner, a PET detector, a PET-CT device or an ultrasound device.
According to the technical solution provided by the embodiments of the present application, the original image is divided into a plurality of regions and the total rate of change of the original gray value of each region is calculated. Depending on how that total rate of change compares with the preset value, either the first noise reduction model is selected to perform first-order total variation processing on the original gray values, or the second noise reduction model is selected to perform second-order total variation processing on them. In this way, noise in the image can be removed while the spatial resolution of the image is improved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings in the following description show only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a diagram of an application environment of an image processing method in an embodiment of the present application;
FIG. 2 is a schematic flow chart diagram of an image processing method according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
FIG. 4 is a schematic block diagram of a computer device provided by an embodiment of the present application;
FIG. 5 is a schematic block diagram of a computer device provided in another embodiment of the present application;
fig. 6 is a schematic structural diagram of an image processing system according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the drawings. The described embodiments are only some, not all, of the embodiments of the present application, and are not intended to limit the scope of the present application or the claims. All other embodiments obtained by a person skilled in the art from these embodiments without creative effort shall fall within the protection scope of the present application.
It will be understood that when an element is referred to as being "disposed on" another element, it can be directly on the other element or intervening elements may also be present. When an element is referred to as being "connected/coupled" to another element, it can be directly connected/coupled to the other element or intervening elements may also be present. The term "connected/coupled" as used herein may include electrical and/or mechanical physical connections/couplings. The term "comprises/comprising" as used herein refers to the presence of features, steps or elements, but does not preclude the presence or addition of one or more other features, steps or elements. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application.
In addition, in the description of the present application, the terms "first", "second", "third", and the like are used for descriptive purposes only and to distinguish similar objects, and there is no order of precedence between the two, and no indication or implication of relative importance is to be inferred. In addition, in the description of the present application, "a plurality" means two or more unless otherwise specified.
The image processing method provided by the embodiment of the application can be applied to the application environment shown in fig. 1. The method may be applied to a computer device. The computer device includes a terminal 1000 and a server 2000 connected through a network. The method may be executed in the terminal 1000 or the server 2000, for example, the terminal 1000 may directly acquire an original image of a target object (e.g., a person or a pet, etc.) from a scanning device or a photographing device, and execute an image processing method on the terminal side; alternatively, the terminal 1000 can also transmit an original image of the target object to the server 2000 after acquiring the original image, so that the server 2000 can obtain the original image of the target object and perform the image processing method. Terminal 1000 can be specifically a desktop terminal (e.g., a desktop computer) or a mobile terminal (e.g., a laptop, a tablet, a cell phone, or a personal digital assistant, etc.). The server 2000 may be implemented as a stand-alone server or a server cluster composed of a plurality of servers.
In the image processing method provided in the embodiment of the present application, the execution subject may be an image processing apparatus, and the image processing apparatus may be implemented as a part or all of the terminal or the server by software, hardware, or a combination of software and hardware.
The image processing method provided by the present application is described in detail below with reference to specific embodiments.
Fig. 2 is an image processing method provided in an embodiment of the present application, which may be executed by an image processing apparatus and may include the steps of:
s1: and dividing the acquired original image into a plurality of areas according to a preset rule.
The original image may be a medical image such as a CT image, a PET-CT image, a Magnetic Resonance Imaging (MRI) image, or an ultrasound image, or may be any other type of image.
After the original image to be processed is acquired, the original image may be divided into a plurality of regions according to a preset rule. For example, the original image may be divided into a plurality of regions in accordance with constituent elements of the original image, for example, for a human body image, it may be divided into a head region, an upper body region, a lower body region, and the like; the original image can be divided into a plurality of areas according to the physical properties of the original image, such as brightness, contrast and/or saturation (or gray value); the original image may also be divided into two regions, i.e., an upper region and a lower region, or four regions, i.e., an upper region, a lower region, a left region, and a right region, according to the orientation, but is not limited thereto.
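As a minimal illustration of the orientation-based preset rule mentioned above, the snippet below splits an image into upper/lower halves or four quadrants; the function name and modes are our own, not terms from the patent.

```python
import numpy as np

def split_by_orientation(image, mode="quadrants"):
    """Divide an image into regions by orientation: 'halves' gives
    upper and lower regions, 'quadrants' gives upper-left, upper-right,
    lower-left and lower-right regions."""
    image = np.asarray(image)
    h, w = image.shape[:2]
    if mode == "halves":
        return [image[: h // 2], image[h // 2 :]]
    return [image[: h // 2, : w // 2], image[: h // 2, w // 2 :],
            image[h // 2 :, : w // 2], image[h // 2 :, w // 2 :]]
```

Rules based on constituent elements (e.g. anatomical segmentation) or physical attributes (brightness, contrast, saturation) would replace this slicing with a segmentation step.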
S2: the total rate of change of the raw gray values for each region is calculated.
The total change rate is mainly used for indicating the overall change rule of the original gray values of all the pixel points in the area. The preset value can be set according to empirical data or application scenarios, and is generally small in value.
After the original image is divided into a plurality of regions, the total change rate of the original gray-scale value of each region may be sequentially calculated and judged whether it is greater than a preset value. Specifically, for each region, the difference between the original gray scale values of every two adjacent pixels may be sequentially calculated, the change rate of the original gray scale values of the two pixels is determined according to the obtained difference between the original gray scale values of every two adjacent pixels and the coordinate difference thereof, and the total change rate of the original gray scale values of the region is determined according to all the obtained change rates of the original gray scale values of all the pixels. For example, the average value of the change rates of the original gray scale values of all the pixel points may be determined as the total change rate of the original gray scale value of the region, or the mean square error operation may also be performed on the change rates of the original gray scale values of all the pixel points to obtain the total change rate of the original gray scale value of the region, but is not limited to the above manner. In addition, for each region, the total change rate of the original gray values of the region can also be determined by counting the change rule of the original gray values of all the pixel points. For example, when the original gray values of all the pixels are linearly changed (preferably, smoothly linearly changed), the slope of the linear change can be determined as the total change rate. At this time, it can also be understood that the change rates of the original gray values of all the pixels are equal, and the total change rate is the change rate.
S3: and judging whether the total rate of change of the original gray value of each region is greater than a preset value, calling the constructed first noise reduction model to perform first-order total variation processing on the original gray value when the total rate of change is judged to be greater than or equal to the preset value, and calling the constructed second noise reduction model to perform second-order total variation processing on the original gray value when the total rate of change is judged to be less than the preset value.
The first noise reduction model and the second noise reduction model may respectively refer to models capable of performing first-order fully-variant processing and second-order fully-variant processing on various image data (including medical images).
After the total change rate of the original gray value of each region is calculated, whether the total change rate of the original gray value of each region is greater than a preset value or not can be sequentially judged, when the change rate of the original gray value is judged to be greater than the preset value, the constructed first noise reduction model can be called to perform first-order total variation processing on the original gray value, and when the total change rate of the original gray value is judged to be less than the preset value, the constructed second noise reduction model is called to perform second-order total variation processing on the original gray value. The first preset threshold may be greater than 0 and smaller than the maximum value of the original gray-scale values in all regions satisfying the condition that the total rate of change of the original gray-scale values is greater than or equal to the preset value, and the second preset threshold may be greater than 0 and smaller than the maximum value of the original gray-scale values in all regions satisfying the condition that the total rate of change of the gray-scale values is smaller than the preset value.
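The model dispatch of step S3 can be sketched as follows, using the mean absolute adjacent difference as the total rate of change; the helper name and the preset value used here are illustrative assumptions.

```python
import numpy as np

def choose_model(region, preset_value=2.0):
    """Region dispatch of step S3: regions whose total rate of change
    is at or above the preset value go to the first-order model
    (edge-rich areas), the rest go to the second-order model
    (smooth areas)."""
    region = np.asarray(region, dtype=float)
    rates = np.concatenate([np.abs(np.diff(region, axis=0)).ravel(),
                            np.abs(np.diff(region, axis=1)).ravel()])
    total_rate = rates.mean() if rates.size else 0.0
    return "first-order" if total_rate >= preset_value else "second-order"
```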
The step of calling the constructed first noise reduction model to perform first-order total variation processing on the original gray value may be performed according to the following procedure:
when the original image is a two-dimensional image, performing first-order gradient processing on the original gray values of all pixel points in all regions meeting the condition that the total change rate of the original gray values is greater than or equal to a preset value according to the following formula (1); judging whether the total gray value of each pixel point obtained after processing is larger than a first preset threshold value or not for each area meeting the condition; for each pixel point, when the total gray value of each pixel point is judged to be greater than or equal to a first preset threshold, the total gray value obtained after processing is determined as the final gray value of the pixel point, or the final gray value of the pixel point is calculated according to the following formula (2), and when the total gray value obtained after processing is judged to be less than the first preset threshold, the total gray value obtained after processing is calculated according to the following formula (3) to obtain the final gray value of the pixel point.
$\nabla f_{x,y} = (f_{x,y} - f_{x-1,y}) + (f_{x,y} - f_{x,y-1})$ (1)
$f'_{x,y} = f_{x,y} + k \times [(f_{x,y} - f_{x-1,y}) + (f_{x,y} - f_{x,y-1})]$ (2)
$f'_{x,y} = f_{x,y} - (f_{x,y} - f_{x-1,y}) - (f_{x,y} - f_{x,y-1})$ (3)
where $f_{x,y}$, $f_{x-1,y}$ and $f_{x,y-1}$ respectively represent the original gray values of the pixel points at coordinates (x, y), (x-1, y) and (x, y-1); $\nabla f_{x,y}$ represents the total gray value of the pixel point at coordinates (x, y) obtained after the first-order gradient processing; $f'_{x,y}$ represents the final gray value of the pixel point at coordinates (x, y); x and y are both natural numbers; k is an image enhancement coefficient, which typically lies within $10^{-5}$ to $10^{5}$.
When the original image is a three-dimensional image, performing first-order gradient processing on the original gray values of all pixel points in all regions meeting the condition that the total change rate of the original gray values is greater than or equal to a preset value according to the following formula (4); judging whether the total gray value of each pixel point obtained after processing is larger than a first preset threshold value or not for each area meeting the condition; for each pixel point, when the total gray value obtained after the processing is judged to be greater than or equal to a first preset threshold, the total gray value obtained after the processing is determined as the final gray value of the pixel point, or the final gray value of the pixel point is calculated according to the following formula (5), and when the total gray value obtained after the processing is judged to be less than the first preset threshold, the total gray value obtained after the processing is calculated according to the following formula (6) to obtain the final gray value of the pixel point.
$$\nabla f_{x,y,z} = (f_{x,y,z} - f_{x-1,y,z}) + (f_{x,y,z} - f_{x,y-1,z}) + (f_{x,y,z} - f_{x,y,z-1}) \qquad (4)$$

$$f'_{x,y,z} = f_{x,y,z} + k \times [(f_{x,y,z} - f_{x-1,y,z}) + (f_{x,y,z} - f_{x,y-1,z}) + (f_{x,y,z} - f_{x,y,z-1})] \qquad (5)$$

$$f'_{x,y,z} = f_{x,y,z} - (f_{x,y,z} - f_{x-1,y,z}) - (f_{x,y,z} - f_{x,y-1,z}) - (f_{x,y,z} - f_{x,y,z-1}) \qquad (6)$$

wherein $f_{x,y,z}$, $f_{x-1,y,z}$, $f_{x,y-1,z}$ and $f_{x,y,z-1}$ respectively represent the original gray values of the pixel points located at coordinates (x, y, z), (x-1, y, z), (x, y-1, z) and (x, y, z-1); $\nabla f_{x,y,z}$ represents the total gray value of the pixel point located at coordinate (x, y, z) obtained after the first-order gradient processing; $f'_{x,y,z}$ represents the final gray value of the pixel point located at coordinate (x, y, z); x, y and z are all natural numbers.
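The 3D first-order branch of formulas (4) to (6) differs from the 2D case only in summing backward differences over three axes. The sketch below makes the same assumptions as before: `first_order_3d`, `t1` and `k` are illustrative names, and the first slice along each axis, having no backward neighbor, gets a zero difference.

```python
import numpy as np

def first_order_3d(f, t1, k=1.0):
    """Sketch of the 3D first-order branch: backward differences along
    x, y and z summed into the total gray value (formula 4), then
    enhancement (formula 5) or smoothing (formula 6) against the first
    preset threshold t1. t1 and k are assumed inputs."""
    f = f.astype(np.float64)
    grad = np.zeros_like(f)
    for axis in range(3):
        d = f - np.roll(f, 1, axis=axis)
        # the first slice along each axis has no backward neighbor
        idx = [slice(None)] * 3
        idx[axis] = 0
        d[tuple(idx)] = 0.0
        grad += d
    return np.where(grad >= t1, f + k * grad, f - grad)
```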
By calling the constructed first noise reduction model to perform first-order total variation processing on the original gray values of regions with larger gray-value variation, noise in those regions can be removed while the spatial resolution is improved, compared with prior-art methods such as median filtering, mean filtering and Gaussian filtering. Furthermore, by determining the final gray value of a pixel point according to formula (2) or (5) above, the contrast and resolution of the image can be improved.
The step of calling the constructed second noise reduction model to perform second-order total variation processing on the original gray value may be performed according to the following procedure:
When the original image is a two-dimensional image, second-order gradient processing may be performed in sequence, according to the following formula (7), on the original gray values of all pixel points in each region that satisfies the condition that the total change rate of the original gray values is less than the preset value. For each such region, it is then judged whether the total gray value of each pixel point obtained after the processing is greater than or equal to a second preset threshold. For each pixel point, when the total gray value obtained after the processing is judged to be greater than or equal to the second preset threshold, that total gray value is determined as the final gray value of the pixel point, or the final gray value of the pixel point is calculated according to the following formula (8); when the total gray value obtained after the processing is judged to be less than the second preset threshold, the final gray value of the pixel point is calculated according to the following formula (9).
$$\nabla_x^2 f_{x,y} = f_{x+1,y} - 2f_{x,y} + f_{x-1,y}, \qquad \nabla_y^2 f_{x,y} = f_{x,y+1} - 2f_{x,y} + f_{x,y-1}, \qquad \nabla^2 f_{x,y} = \nabla_x^2 f_{x,y} + \nabla_y^2 f_{x,y} \qquad (7)$$

$$f'_{x,y} = f_{x,y} + k \times \nabla^2 f_{x,y} \qquad (8)$$

$$f'_{x,y} = f_{x,y} - \nabla_x^2 f_{x,y} - \nabla_y^2 f_{x,y} \qquad (9)$$

wherein $f_{x,y}$, $f_{x-1,y}$, $f_{x+1,y}$, $f_{x,y-1}$ and $f_{x,y+1}$ respectively represent the original gray values of the pixel points located at coordinates (x, y), (x-1, y), (x+1, y), (x, y-1) and (x, y+1); $\nabla_x^2 f_{x,y}$ and $\nabla_y^2 f_{x,y}$ respectively represent the gray values, in the x-axis and y-axis directions, of the pixel point located at coordinate (x, y) obtained after the second-order gradient processing; $\nabla^2 f_{x,y}$ represents the total gray value of the pixel point located at coordinate (x, y) obtained after the second-order gradient processing; $f'_{x,y}$ represents the final gray value of the pixel point located at coordinate (x, y).
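Under the assumption that the second-order gradient is the usual central second difference (the patent's original formula images are not reproduced on this page), the 2D second-order branch can be sketched as follows; `second_order_2d`, `t2` and `k` are illustrative names, and border pixels are given a zero second-order gradient.

```python
import numpy as np

def second_order_2d(f, t2, k=1.0):
    """Sketch of the 2D second-order branch: central second differences
    along x and y (formula 7), then per-pixel enhancement (formula 8)
    or subtraction (formula 9) against a second preset threshold t2.
    t2 and k are assumed inputs."""
    f = f.astype(np.float64)
    lap = np.zeros_like(f)
    # interior pixels only; borders keep a zero second-order gradient
    lap[1:-1, :] += f[2:, :] - 2.0 * f[1:-1, :] + f[:-2, :]   # x direction
    lap[:, 1:-1] += f[:, 2:] - 2.0 * f[:, 1:-1] + f[:, :-2]   # y direction
    return np.where(lap >= t2, f + k * lap, f - lap)
```

A linearly varying image has a zero second-order gradient everywhere, so it passes through unchanged, which is the fidelity property the description claims for this branch.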
When the original image is a three-dimensional image, second-order gradient processing may be performed in sequence, according to the following formula (10), on the original gray values of all pixel points in each region that satisfies the condition that the total change rate of the original gray values is less than the preset value. For each such region, it is then judged whether the total gray value of each pixel point obtained after the processing is greater than or equal to a second preset threshold. For each pixel point, when the total gray value is judged to be greater than or equal to the second preset threshold, that total gray value is determined as the final gray value of the pixel point, or the final gray value of the pixel point is calculated according to the following formula (11); when the total gray value is judged to be less than the second preset threshold, the final gray value of the pixel point is calculated according to the following formula (12).
$$\nabla_x^2 f_{x,y,z} = f_{x+1,y,z} - 2f_{x,y,z} + f_{x-1,y,z}, \qquad \nabla_y^2 f_{x,y,z} = f_{x,y+1,z} - 2f_{x,y,z} + f_{x,y-1,z},$$
$$\nabla_z^2 f_{x,y,z} = f_{x,y,z+1} - 2f_{x,y,z} + f_{x,y,z-1}, \qquad \nabla^2 f_{x,y,z} = \nabla_x^2 f_{x,y,z} + \nabla_y^2 f_{x,y,z} + \nabla_z^2 f_{x,y,z} \qquad (10)$$

$$f'_{x,y,z} = f_{x,y,z} + k \times \nabla^2 f_{x,y,z} \qquad (11)$$

$$f'_{x,y,z} = f_{x,y,z} - \nabla_x^2 f_{x,y,z} - \nabla_y^2 f_{x,y,z} - \nabla_z^2 f_{x,y,z} \qquad (12)$$

wherein $f_{x,y,z}$, $f_{x-1,y,z}$, $f_{x+1,y,z}$, $f_{x,y-1,z}$, $f_{x,y+1,z}$, $f_{x,y,z-1}$ and $f_{x,y,z+1}$ respectively represent the original gray values of the pixel points located at coordinates (x, y, z), (x-1, y, z), (x+1, y, z), (x, y-1, z), (x, y+1, z), (x, y, z-1) and (x, y, z+1); $\nabla_x^2 f_{x,y,z}$, $\nabla_y^2 f_{x,y,z}$ and $\nabla_z^2 f_{x,y,z}$ respectively represent the gray values, in the x-axis, y-axis and z-axis directions, of the pixel point located at coordinate (x, y, z) obtained after the second-order gradient processing; $\nabla^2 f_{x,y,z}$ represents the total gray value of the pixel point located at coordinate (x, y, z) obtained after the second-order gradient processing; $f'_{x,y,z}$ represents the final gray value of the pixel point located at coordinate (x, y, z).
For the case where the gray value of the image varies smoothly and linearly with a small slope, performing second-order total variation processing on the original gray values by calling the second noise reduction model improves the spatial resolution while preserving the fidelity of the image. This is because, when the original gray value varies linearly, the second-order gradient of the pixel point is zero in every direction, so the final gray value of the pixel point equals its original gray value. In addition, by determining the final gray value of a pixel point according to formula (8) or (11) above, the contrast and resolution of the image can be improved.
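The fidelity claim can be checked directly. For a gray-value field that varies linearly, say $f_{x,y} = ax + by + c$ (a worked example, taking the second-order gradient as the central second difference), each directional term vanishes:

$$f_{x+1,y} - 2f_{x,y} + f_{x-1,y} = a(x+1) - 2ax + a(x-1) = 0,$$

and likewise $f_{x,y+1} - 2f_{x,y} + f_{x,y-1} = 0$. The total second-order gray value is therefore zero, and both branches of the second noise reduction model return $f'_{x,y} = f_{x,y}$.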
It should be noted that the coordinates (x-1, y), (x+1, y), (x, y-1) and (x, y+1) in the above formulas merely express the positional relationship to the coordinate (x, y) and do not necessarily refer to specific coordinate values. For example, the pixel points at coordinates (x-1, y) and (x+1, y) are adjacent, along the x-axis direction, to the pixel point at coordinate (x, y), and the pixel points at coordinates (x, y-1) and (x, y+1) are adjacent to it along the y-axis direction. Likewise, the coordinates (x-1, y, z), (x+1, y, z), (x, y-1, z), (x, y+1, z), (x, y, z-1) and (x, y, z+1) in the above formulas only express the positional relationship to the coordinate (x, y, z).
As can be seen from the above description, the embodiment of the present application divides the original image into a plurality of regions and calculates the total change rate of the original gray values of each region. According to the magnitude relationship between each region's total change rate and the preset value, either the constructed first noise reduction model is selected to perform first-order total variation processing on the original gray values, or the constructed second noise reduction model is selected to perform second-order total variation processing. Within the total variation processing, the first-order gradient result is compared with the first preset threshold, or the second-order gradient result with the second preset threshold. Noise is thereby removed at positions where the gray value falls below the applicable threshold, while the original gradient is kept and enhanced at positions above it, so that the gradient becomes higher where the gray value is high and lower where it is low. Contrast is thus enhanced and resolution improved, achieving the purpose of improving spatial resolution while removing noise from the image.
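The overall dispatch described above, region division followed by model selection on the total change rate, can be sketched as below. The block size and the change-rate definition (mean absolute difference between adjacent pixels, summed over the two axes) are illustrative assumptions, since the patent allows several readings of the total change rate; the sketch returns only the per-region model label (1 = first-order model, 2 = second-order model) so it stays self-contained.

```python
import numpy as np

def model_map(image, preset, block=8):
    """Region-dispatch step only: divide the image into block x block
    regions, estimate each region's total change rate, and mark the
    region 1 (first noise reduction model) when the rate is >= preset,
    else 2 (second noise reduction model). The rate definition here
    is one plausible reading of the patent, not its exact formula."""
    f = image.astype(np.float64)
    h, w = f.shape
    labels = np.zeros((h, w), dtype=np.int8)
    for i in range(0, h, block):
        for j in range(0, w, block):
            r = f[i:i + block, j:j + block]
            rate = 0.0
            if r.shape[0] > 1:
                rate += np.abs(np.diff(r, axis=0)).mean()
            if r.shape[1] > 1:
                rate += np.abs(np.diff(r, axis=1)).mean()
            labels[i:i + block, j:j + block] = 1 if rate >= preset else 2
    return labels
```

A region with strong local variation (e.g. a checkerboard block) is routed to the first-order model, while a flat region is routed to the second-order model.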
An embodiment of the present application further provides an image processing apparatus, as shown in fig. 3, which may include:
a dividing unit 310 configured to divide the acquired original image into a plurality of regions according to a preset rule;
a calculating unit 320 configured to calculate a total rate of change of the original gradation value of each region;
and the processing unit 330 is configured to determine whether the total change rate is greater than a preset value, call the constructed first noise reduction model to perform first-order total variation processing on the original gray value when the total change rate of the original gray value is determined to be greater than or equal to the preset value, and call the constructed second noise reduction model to perform second-order total variation processing on the original gray value when the total change rate of the original gray value is determined to be less than the preset value.
For a detailed description of the above-mentioned units, reference may be made to the corresponding description in the method embodiments, which are not to be repeated here.
By using the image processing apparatus, it is possible to realize removal of noise in an image and to improve spatial resolution thereof.
FIG. 4 shows a schematic diagram of a computer device in one embodiment. The computer device may specifically be the terminal 1000 in fig. 1. As shown in fig. 4, the computer apparatus includes a processor, a memory, a network interface, an input device, and a display, which are connected through a system bus. Wherein the memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium of the computer device may store an operating system and may also store a computer program that, when executed by the processor, causes the processor to perform the image processing method described in the above embodiments. The internal memory may also store a computer program, which when executed by the processor, performs the image processing method described in the above embodiments.
Fig. 5 shows a schematic structural diagram of a computer device in another embodiment. The computer device may specifically be the server 2000 in fig. 1. As shown in fig. 5, the computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium of the computer device stores an operating system and may also store a computer program that, when executed by the processor, causes the processor to perform the image processing method described in the above embodiments. The internal memory may also store a computer program, which when executed by the processor, performs the image processing method described in the above embodiments.
Those skilled in the art will appreciate that the configurations shown in fig. 4 and 5 are merely block diagrams of some configurations related to the present application and do not limit the computer devices to which the present application is applied; a particular computer device may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
In one embodiment, as shown in fig. 6, the present application also provides an image processing system, which may include the computer device of fig. 4 or 5 and a scanning device connected thereto, which may be used to obtain an original image by scanning a target object and provide the obtained original image to the computer device. The scanning device may be any device capable of detecting radioactive rays, and may include, for example, but not limited to, a CT scanner, an MRI scanner, a PET detector, a PET-CT device, an ultrasound device, or the like.
In one embodiment, the present application further provides a computer-readable storage medium, in which a computer program is stored; when executed, the computer program can implement the corresponding functions described in the above method embodiments. The computer program may also be run on a computer device as shown in fig. 4 or fig. 5. The memory of the computer device contains the various program modules constituting the apparatus, and the computer program constituted by these program modules, when executed, can realize the functions corresponding to the respective steps of the image processing method described in the above embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a non-volatile computer-readable storage medium, and the processes of the embodiments of the methods described above can be included when the program is executed. Any reference to memory, storage media, databases, or other media used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The systems, devices, apparatuses, units and the like set forth in the above embodiments may be specifically implemented by semiconductor chips, computer chips and/or entities, or implemented by products with certain functions. For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functions of the units may be implemented in the same or multiple chips when implementing the present application.
Although the present application provides method steps as described in the above embodiments or flowcharts, additional or fewer steps may be included in the method, based on conventional or non-inventive efforts. In the case of steps where no necessary causal relationship exists logically, the order of execution of the steps is not limited to that provided by the embodiments of the present application.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In addition, the technical features of the above embodiments may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The embodiments described above are described in order to enable those skilled in the art to understand and use the present application. It will be readily apparent to those skilled in the art that various modifications to these embodiments may be made, and the generic principles described herein may be applied to other embodiments without the use of the inventive faculty. Therefore, the present application is not limited to the above embodiments, and those skilled in the art should make improvements and modifications within the scope of the present application based on the disclosure of the present application.

Claims (15)

1. An image processing method, characterized in that the method comprises:
dividing the acquired original image into a plurality of areas according to a preset rule;
calculating the total change rate of the original gray value of each region;
and judging whether the total change rate of the original gray value is greater than a preset value, calling the constructed first noise reduction model to perform first-order total variation processing on the original gray value when judging that the total change rate of the original gray value is greater than or equal to the preset value, and calling the constructed second noise reduction model to perform second-order total variation processing on the original gray value when judging that the total change rate of the original gray value is less than the preset value.
2. The method of claim 1, wherein the step of dividing the original image into a plurality of regions according to a preset rule comprises:
dividing the original image into a plurality of regions according to the constituent elements, physical attributes or orientations of the original image.
3. The method of claim 1, wherein the step of calculating the total rate of change of the raw gray scale values for each of the regions comprises:
for each region, calculating the difference value between the original gray values of every two adjacent pixel points in the region in sequence;
determining the change rate of the original gray values of two adjacent pixel points according to the difference value of the original gray values and the coordinate difference value of every two adjacent pixel points;
and determining the total change rate of the original gray values of the region according to the obtained change rate of the original gray values of all the pixel points in the region.
4. The method of claim 3, wherein the step of determining a total rate of change of the raw grayscale values for the region comprises:
determining the average value of the change rates of the original gray values of all the pixel points in the region as the total change rate of the original gray values of the region; or
And performing mean square error operation on the change rate of the original gray values of all the pixel points in the region to obtain the total change rate of the original gray values of the region.
5. The method of claim 1, wherein the step of calculating the total rate of change of the raw gray scale values for each of the regions further comprises:
counting the change rule of the original gray value of all the pixel points in each region, and determining the slope of the linear change of the original gray value as the total change rate of the original gray value of the region when the original gray value is linearly changed.
6. The method according to claim 1, wherein when the original image is a two-dimensional image, the step of calling the constructed first noise reduction model to perform first-order total variation processing on the original gray-scale value comprises:
sequentially carrying out first-order gradient processing on the original gray values of all pixel points in all the regions which meet the condition that the total change rate of the original gray values is greater than or equal to the preset value according to the following formula:
$$\nabla f_{x,y} = (f_{x,y} - f_{x-1,y}) + (f_{x,y} - f_{x,y-1})$$
judging whether the total gray value of each pixel point obtained after processing is larger than a first preset threshold value or not aiming at each region;
for each pixel point, when the total gray value obtained after the processing is judged to be greater than or equal to the first preset threshold, determining the total gray value as the final gray value of the pixel point, or calculating the final gray value of the pixel point according to the following formula:
$$f'_{x,y} = f_{x,y} + k \times [(f_{x,y} - f_{x-1,y}) + (f_{x,y} - f_{x,y-1})],$$
when the total gray value is judged to be smaller than the first preset threshold, calculating the total gray value according to the following formula to obtain the final gray value of the pixel point:
$$f'_{x,y} = f_{x,y} - (f_{x,y} - f_{x-1,y}) - (f_{x,y} - f_{x,y-1}),$$
wherein $f_{x,y}$, $f_{x-1,y}$ and $f_{x,y-1}$ respectively represent the original gray values of the pixel points located at coordinates (x, y), (x-1, y) and (x, y-1); $\nabla f_{x,y}$ represents the total gray value of the pixel point located at coordinate (x, y) obtained after the processing; $f'_{x,y}$ represents the final gray value of the pixel point located at coordinate (x, y); x and y are both natural numbers; and k is an image enhancement coefficient.
7. The method according to claim 1, wherein when the original image is a three-dimensional image, the step of calling the constructed first noise reduction model to perform first-order total variation processing on the original gray-scale value comprises:
sequentially carrying out first-order gradient processing on the original gray values of all pixel points in all the regions which meet the condition that the total change rate of the original gray values is greater than or equal to the preset value according to the following formula:
$$\nabla f_{x,y,z} = (f_{x,y,z} - f_{x-1,y,z}) + (f_{x,y,z} - f_{x,y-1,z}) + (f_{x,y,z} - f_{x,y,z-1})$$
judging whether the total gray value of each pixel point obtained after processing is larger than a first preset threshold value or not aiming at each region;
for each pixel point, when the total gray value obtained after the processing is judged to be greater than or equal to the first preset threshold, determining the total gray value as the final gray value of the pixel point, or calculating the final gray value of the pixel point according to the following formula:
$$f'_{x,y,z} = f_{x,y,z} + k \times [(f_{x,y,z} - f_{x-1,y,z}) + (f_{x,y,z} - f_{x,y-1,z}) + (f_{x,y,z} - f_{x,y,z-1})],$$
when the total gray value is judged to be smaller than the first preset threshold, calculating the total gray value according to the following formula to obtain the final gray value of the pixel point:
$$f'_{x,y,z} = f_{x,y,z} - (f_{x,y,z} - f_{x-1,y,z}) - (f_{x,y,z} - f_{x,y-1,z}) - (f_{x,y,z} - f_{x,y,z-1}),$$
wherein $f_{x,y,z}$, $f_{x-1,y,z}$, $f_{x,y-1,z}$ and $f_{x,y,z-1}$ respectively represent the original gray values of the pixel points located at coordinates (x, y, z), (x-1, y, z), (x, y-1, z) and (x, y, z-1); $\nabla f_{x,y,z}$ represents the total gray value of the pixel point located at coordinate (x, y, z) obtained after the processing; $f'_{x,y,z}$ represents the final gray value of the pixel point located at coordinate (x, y, z); x, y and z are all natural numbers; and k is an image enhancement coefficient.
8. The method according to claim 1, wherein when the original image is a two-dimensional image, the step of invoking the second noise reduction model to perform second-order total variation processing on the original gray value comprises:
and sequentially carrying out second-order gradient processing on the original gray values of all the pixel points in all the areas which meet the condition that the total change rate of the original gray values is smaller than the preset value according to the following formula:
$$\nabla_x^2 f_{x,y} = f_{x+1,y} - 2f_{x,y} + f_{x-1,y}$$
$$\nabla_y^2 f_{x,y} = f_{x,y+1} - 2f_{x,y} + f_{x,y-1}$$
$$\nabla^2 f_{x,y} = \nabla_x^2 f_{x,y} + \nabla_y^2 f_{x,y}$$
judging whether the total gray value of each pixel point obtained after processing is larger than a second preset threshold value or not aiming at each region;
for each pixel point, when the total gray value obtained after the processing is judged to be greater than or equal to the second preset threshold, determining the total gray value as the final gray value of the pixel point, or calculating the final gray value of the pixel point according to the following formula:
$$f'_{x,y} = f_{x,y} + k \times \nabla^2 f_{x,y},$$
when the total gray value is judged to be smaller than the second preset threshold, calculating the total gray value according to the following formula to obtain the final gray value of the pixel point:
$$f'_{x,y} = f_{x,y} - \nabla_x^2 f_{x,y} - \nabla_y^2 f_{x,y},$$
wherein $f_{x,y}$, $f_{x-1,y}$, $f_{x+1,y}$, $f_{x,y-1}$ and $f_{x,y+1}$ respectively represent the original gray values of the pixel points located at coordinates (x, y), (x-1, y), (x+1, y), (x, y-1) and (x, y+1); $\nabla_x^2 f_{x,y}$ and $\nabla_y^2 f_{x,y}$ respectively represent the gray values, in the x-axis and y-axis directions, of the pixel point located at coordinate (x, y) obtained after the processing; $\nabla^2 f_{x,y}$ represents the total gray value of the pixel point located at coordinate (x, y) obtained after the processing; $f'_{x,y}$ represents the final gray value of the pixel point located at coordinate (x, y); x and y are both natural numbers; and k is an image enhancement coefficient.
9. The method according to claim 1, wherein when the original image is a three-dimensional image, the step of calling the second noise reduction model to perform second-order total variation processing on the original gray value comprises:
and sequentially carrying out second-order gradient processing on the original gray values of all the pixel points in all the areas which meet the condition that the total change rate of the original gray values is smaller than the preset value according to the following formula:
$$\nabla_x^2 f_{x,y,z} = f_{x+1,y,z} - 2f_{x,y,z} + f_{x-1,y,z}$$
$$\nabla_y^2 f_{x,y,z} = f_{x,y+1,z} - 2f_{x,y,z} + f_{x,y-1,z}$$
$$\nabla_z^2 f_{x,y,z} = f_{x,y,z+1} - 2f_{x,y,z} + f_{x,y,z-1}$$
$$\nabla^2 f_{x,y,z} = \nabla_x^2 f_{x,y,z} + \nabla_y^2 f_{x,y,z} + \nabla_z^2 f_{x,y,z}$$
judging whether the total gray value of each pixel point obtained after processing is larger than a second preset threshold value or not aiming at each region;
for each pixel point, when the total gray value obtained after the processing is judged to be greater than or equal to the second preset threshold, determining the total gray value as the final gray value of the pixel point, or calculating the final gray value of the pixel point according to the following formula:
$$f'_{x,y,z} = f_{x,y,z} + k \times \nabla^2 f_{x,y,z},$$
when the total gray value is judged to be smaller than the second preset threshold, calculating the total gray value according to the following formula to obtain the final gray value of the pixel point:
$$f'_{x,y,z} = f_{x,y,z} - \nabla_x^2 f_{x,y,z} - \nabla_y^2 f_{x,y,z} - \nabla_z^2 f_{x,y,z},$$
wherein $f_{x,y,z}$, $f_{x-1,y,z}$, $f_{x+1,y,z}$, $f_{x,y-1,z}$, $f_{x,y+1,z}$, $f_{x,y,z-1}$ and $f_{x,y,z+1}$ respectively represent the original gray values of the pixel points located at coordinates (x, y, z), (x-1, y, z), (x+1, y, z), (x, y-1, z), (x, y+1, z), (x, y, z-1) and (x, y, z+1); $\nabla_x^2 f_{x,y,z}$, $\nabla_y^2 f_{x,y,z}$ and $\nabla_z^2 f_{x,y,z}$ respectively represent the gray values, in the x-axis, y-axis and z-axis directions, of the pixel point located at coordinate (x, y, z) obtained after the processing; $\nabla^2 f_{x,y,z}$ represents the total gray value of the pixel point located at coordinate (x, y, z) obtained after the processing; $f'_{x,y,z}$ represents the final gray value of the pixel point located at coordinate (x, y, z); x, y and z are all natural numbers; and k is an image enhancement coefficient.
10. The method of any one of claims 1-9, wherein the raw image comprises a CT image, an MRI image, a PET-CT image, or an ultrasound image.
11. An image processing apparatus, characterized in that the apparatus comprises:
a dividing unit configured to divide the acquired original image into a plurality of regions according to a preset rule;
a calculation unit configured to calculate a total rate of change of the original gradation value of each of the regions;
and the processing unit is configured to judge whether the total change rate is greater than a preset value, call the constructed first noise reduction model to perform first-order total variation processing on the original gray value when the total change rate of the original gray value is judged to be greater than or equal to the preset value, and call the constructed second noise reduction model to perform second-order total variation processing on the original gray value when the total change rate of the original gray value is judged to be less than the preset value.
12. A computer-readable storage medium, on which a computer program is stored, which, when executed, is capable of implementing the image processing method of any one of claims 1-10.
13. A computer arrangement, characterized in that the arrangement comprises a processor and a memory, wherein the memory has stored thereon a computer program which, when executed, causes the processor to carry out the image processing method according to any one of claims 1-10.
14. An image processing system, characterized in that the system comprises a computer device as claimed in claim 13 and a scanning device connected to the computer device.
15. The system of claim 14, wherein the scanning device comprises a CT scanner, an MRI scanner, a PET detector, a PET-CT device, or an ultrasound device.
CN202011193184.XA 2020-10-30 2020-10-30 Image processing method, device, equipment, system and computer readable storage medium Pending CN112365413A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011193184.XA CN112365413A (en) 2020-10-30 2020-10-30 Image processing method, device, equipment, system and computer readable storage medium
PCT/CN2021/118707 WO2022089079A1 (en) 2020-10-30 2021-09-16 Image processing method, apparatus and system, and device and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011193184.XA CN112365413A (en) 2020-10-30 2020-10-30 Image processing method, device, equipment, system and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN112365413A true CN112365413A (en) 2021-02-12

Family

ID=74513316

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011193184.XA Pending CN112365413A (en) 2020-10-30 2020-10-30 Image processing method, device, equipment, system and computer readable storage medium

Country Status (2)

Country Link
CN (1) CN112365413A (en)
WO (1) WO2022089079A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113763275A (en) * 2021-09-09 2021-12-07 深圳市文立科技有限公司 Adaptive image noise reduction method and system and readable storage medium
WO2022089079A1 (en) * 2020-10-30 2022-05-05 湖北锐世数字医学影像科技有限公司 Image processing method, apparatus and system, and device and computer-readable storage medium

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116010996B (en) * 2023-03-01 2023-06-13 上海伯镭智能科技有限公司 Unmanned system development data safety management method
CN116485787B (en) * 2023-06-15 2023-08-22 东莞市立时电子有限公司 Method for detecting appearance defects of data line molding outer die
CN116721099B (en) * 2023-08-09 2023-11-21 山东奥洛瑞医疗科技有限公司 Image segmentation method of liver CT image based on clustering

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150324961A1 (en) * 2013-01-23 2015-11-12 Tencent Technology (Shenzhen) Co., Ltd. Method and apparatus for adjusting image brightness
US20170301081A1 (en) * 2015-09-30 2017-10-19 Shanghai United Imaging Healthcare Co., Ltd. System and method for determining a breast region in a medical image
WO2019023993A1 (en) * 2017-08-02 2019-02-07 深圳传音通讯有限公司 Method and device for processing photograph of intelligent terminal
CN109584185A (en) * 2018-12-19 2019-04-05 深圳市华星光电半导体显示技术有限公司 Image processing method

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105654428A (en) * 2014-11-14 2016-06-08 联芯科技有限公司 Method and system for image noise reduction
CN110689496B (en) * 2019-09-25 2022-10-14 北京迈格威科技有限公司 Method and device for determining noise reduction model, electronic equipment and computer storage medium
CN112365413A (en) * 2020-10-30 2021-02-12 湖北锐世数字医学影像科技有限公司 Image processing method, device, equipment, system and computer readable storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022089079A1 (en) * 2020-10-30 2022-05-05 Hubei Ruishi Digital Medical Imaging Technology Co., Ltd. Image processing method, apparatus and system, and device and computer-readable storage medium
CN113763275A (en) * 2021-09-09 2021-12-07 Shenzhen Wenli Technology Co., Ltd. Adaptive image noise reduction method and system and readable storage medium

Also Published As

Publication number Publication date
WO2022089079A1 (en) 2022-05-05

Similar Documents

Publication Publication Date Title
CN112365413A (en) Image processing method, device, equipment, system and computer readable storage medium
CN109523584B (en) Image processing method and device, multi-modality imaging system, storage medium and equipment
US8401265B2 (en) Processing of medical image data
Razlighi et al. Evaluating similarity measures for brain image registration
CN109697740B (en) Image reconstruction method and device and computer equipment
CN111161269B (en) Image segmentation method, computer device, and readable storage medium
US20230036359A1 (en) Image reconstruction method, device, equipment, system, and computer-readable storage medium
JP2014179971A (en) Image processing system, image processing method, and program
CN111179372B (en) Image attenuation correction method, image attenuation correction device, computer equipment and storage medium
CN109544657B (en) Medical image iterative reconstruction method, device, computer equipment and storage medium
WO2021135773A1 (en) Image reconstruction method, apparatus, device, and system, and computer readable storage medium
WO2024066049A1 (en) Pet image denoising method, terminal device, and readable storage medium
WO2019167597A1 (en) Super-resolution processing device and method, and program
CN111105452A (en) High-low resolution fusion stereo matching method based on binocular vision
WO2021165053A1 (en) Out-of-distribution detection of input instances to a model
CN113112561B (en) Image reconstruction method and device and electronic equipment
CN111161330B (en) Non-rigid image registration method, device, system, electronic equipment and storage medium
Karthik et al. Automatic quality enhancement of medical diagnostic scans with deep neural image super-resolution models
CN111784733B (en) Image processing method, device, terminal and computer readable storage medium
CN114494014A (en) Magnetic resonance image super-resolution reconstruction method and device
CN112950457A (en) Image conversion method, training method of related model, related device and equipment
JP2019176532A (en) Image processing apparatus, image processing method, and program
CN112614205B (en) Image reconstruction method and device
CN116681715B (en) Blood vessel segmentation method, device, equipment and storage medium based on pixel value change
JP6543099B2 (en) INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination