CN112381714A - Image processing method, device, storage medium and equipment - Google Patents

Image processing method, device, storage medium and equipment

Info

Publication number
CN112381714A
CN112381714A CN202011191970.6A
Authority
CN
China
Prior art keywords
image
characteristic
pixel point
pixel points
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011191970.6A
Other languages
Chinese (zh)
Inventor
方伟
徐玲
吴桐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanyang Clear Technology Co Ltd
Original Assignee
Nanyang Clear Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanyang Clear Technology Co Ltd filed Critical Nanyang Clear Technology Co Ltd
Priority to CN202011191970.6A priority Critical patent/CN112381714A/en
Publication of CN112381714A publication Critical patent/CN112381714A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4084 Scaling of whole images or parts thereof, e.g. expanding or contracting in the transform domain, e.g. fast Fourier transform [FFT] domain scaling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4007 Scaling of whole images or parts thereof, e.g. expanding or contracting based on interpolation, e.g. bilinear interpolation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 ICT specially adapted for the handling or processing of medical images
    • G16H30/20 ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Quality & Reliability (AREA)
  • Editing Of Facsimile Originals (AREA)
  • Image Processing (AREA)

Abstract

The embodiment of the invention discloses an image processing method, device, storage medium and equipment. The method comprises: magnifying an original image by a preset multiple based on an image linear scaling algorithm to obtain a first image; magnifying the original image by the preset multiple based on an image bending scaling algorithm to obtain a second image; extracting pixel points belonging to linear images from the first image to obtain a first characteristic image; extracting pixel points belonging to bent images from the second image to obtain a second characteristic image; and superimposing the first characteristic image and the second characteristic image to obtain a target image. By magnifying the original image and then extracting and superimposing characteristic images from the magnified images, a final target image is obtained, which improves the image quality of the obtained target image.

Description

Image processing method, device, storage medium and equipment
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image processing method, an image processing apparatus, a storage medium, and a device.
Background
DICOM (Digital Imaging and Communications in Medicine) can be read literally: first, the objects it applies to are digitized images; second, the core of the specification is "communication". DICOM can therefore be understood as a common specification for the communication of medical digital images. From the management perspective of a hospital, for example, a DICOM environment can be established across the whole hospital from top to bottom, and subsystems with different characteristics can then be built according to departmental needs. In this way a uniform image standard is formed within the hospital, and the goal of "plug and play" is achieved when new equipment is added. The DICOM standard is also adopted as the basis for medical image communication between hospitals and internationally, for example to realize lossless image transmission in remote consultation. However, the common DICOM image-printing processing methods in the prior art cannot handle images with high quality requirements and high resolution, and cannot meet hospital needs.
Disclosure of Invention
In view of the above, it is necessary to provide an image processing method, apparatus, storage medium, and device that obtain a final target image by magnifying an original image, extracting characteristic images from the magnified images, and superimposing them, thereby improving the image quality of the obtained target image.
In a first aspect, the present application provides a method of image processing, the method comprising:
the method comprises the steps that an original image is amplified by a preset multiple based on an image linear scaling algorithm to obtain a first image, and the original image is amplified by the preset multiple based on an image bending scaling algorithm to obtain a second image;
extracting pixel points belonging to a linear image from the first image to obtain a first characteristic image, and extracting pixel points belonging to a bent image from the second image to obtain a second characteristic image;
and performing superposition operation on the first characteristic image and the second characteristic image to obtain a target image.
Optionally, the extracting pixel points belonging to a linear image from the first image to obtain a first feature image includes:
traversing each pixel point in the first image, finding out pixel points with the same color of adjacent transverse pixel points and adjacent longitudinal pixel points in the first image as first characteristic pixel points, wherein the color is a non-background color, and recording the coordinates of the first characteristic pixel points;
setting the color of a pixel point which is not the coordinate of the first characteristic pixel point in the first image as a background color to obtain the first characteristic image.
Optionally, the extracting, from the second image, pixel points belonging to the warped image to obtain a second feature image includes:
traversing each pixel point in the second image, finding out pixel points with the same color of adjacent transverse pixel points and adjacent longitudinal pixel points in the second image as second characteristic pixel points, wherein the color is a non-background color, and recording the coordinates of the second characteristic pixel points;
setting the color of a pixel point which is not the coordinate of the second characteristic pixel point in the second image as a background color to obtain the second characteristic image.
Optionally, the superimposing operation performed on the first feature image and the second feature image to obtain the target image includes:
determining target pixel points according to the pixel points at the corresponding positions in the first characteristic image and the second characteristic image;
and combining the target pixel points into the target image.
Optionally, the determining a target pixel point according to a pixel point at a corresponding position in the first feature image and the second feature image includes:
if the first characteristic pixel point exists in the corresponding position and the second characteristic pixel point does not exist, the first characteristic pixel point is reserved as a target pixel point;
if the second characteristic pixel point exists in the corresponding position and the first characteristic pixel point does not exist, the second characteristic pixel point is reserved as a target pixel point;
and if the first characteristic pixel point does not exist and the second characteristic pixel point does not exist in the corresponding position, replacing the pixel point in the corresponding position with a pixel point with background color as a target pixel point.
Optionally, the image linear scaling algorithm is a nearest neighbor interpolation algorithm, and the process of magnifying the original image by a preset magnification based on the image linear scaling algorithm to obtain the first image includes:
and amplifying the original image by the preset times based on the nearest neighbor interpolation algorithm to obtain the first image.
Optionally, the image warping and scaling algorithm is a bilinear interpolation algorithm, and the process of magnifying the original image by the preset multiple based on the image warping and scaling algorithm to obtain a second image includes:
and carrying out amplification processing of amplifying the original image by the preset times based on the bilinear interpolation algorithm to obtain the second image.
In a second aspect, the present application provides an image processing apparatus comprising:
the processing module is used for carrying out amplification processing on the original image by preset times based on an image linear scaling algorithm to obtain a first image, and carrying out amplification processing on the original image by the preset times based on an image bending scaling algorithm to obtain a second image;
the extraction module is used for extracting pixel points belonging to a linear image from the first image to obtain a first characteristic image, and extracting pixel points belonging to a bent image from the second image to obtain a second characteristic image;
and the superposition module is used for carrying out superposition operation on the first characteristic image and the second characteristic image to obtain a target image.
In a third aspect, the present application provides a computer readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of:
the method comprises the steps that an original image is amplified by a preset multiple based on an image linear scaling algorithm to obtain a first image, and the original image is amplified by the preset multiple based on an image bending scaling algorithm to obtain a second image;
extracting pixel points belonging to a linear image from the first image to obtain a first characteristic image, and extracting pixel points belonging to a bent image from the second image to obtain a second characteristic image;
and performing superposition operation on the first characteristic image and the second characteristic image to obtain a target image.
In a fourth aspect, the present application provides an image processing apparatus comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of:
the method comprises the steps that an original image is amplified by a preset multiple based on an image linear scaling algorithm to obtain a first image, and the original image is amplified by the preset multiple based on an image bending scaling algorithm to obtain a second image;
extracting pixel points belonging to a linear image from the first image to obtain a first characteristic image, and extracting pixel points belonging to a bent image from the second image to obtain a second characteristic image;
and performing superposition operation on the first characteristic image and the second characteristic image to obtain a target image.
The embodiment of the invention has the following beneficial effects:
the image processing method, device, storage medium and equipment of the invention are adopted, and the method comprises the following steps: the method comprises the steps of carrying out amplification processing on an original image by preset times based on an image linear scaling algorithm to obtain a first image, carrying out amplification processing on the original image by the preset times based on an image bending scaling algorithm to obtain a second image, extracting pixel points belonging to the linear image from the first image to obtain a first characteristic image, extracting pixel points belonging to the bending image from the second image to obtain a second characteristic image, and carrying out superposition operation on the first characteristic image and the second characteristic image to obtain a target image. The original image is amplified, and the amplified image is subjected to feature image extraction processing and superposition processing, so that a final target image is obtained, and the effect of improving the image quality of the obtained target image is achieved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Wherein:
FIG. 1 is a schematic flow chart illustrating a method of image processing according to an embodiment of the present disclosure;
FIG. 2 is a diagram illustrating an original drawing in an embodiment of the present application;
FIG. 3 is a schematic diagram of a first image in an embodiment of the present application;
FIG. 4 is a schematic diagram of a second image in an embodiment of the present application;
FIG. 5 is a flowchart illustrating the step of refining the first feature image in step 102 in the embodiment shown in FIG. 1;
FIG. 6 is a diagram illustrating a first feature image according to an embodiment of the present application;
FIG. 7 is a flowchart illustrating the refinement step of the second feature image in step 102 in the embodiment shown in FIG. 1 of the present application;
FIG. 8 is a diagram illustrating a second feature image according to an embodiment of the present application;
FIG. 9 is a flow chart illustrating a refinement step of step 103 in the embodiment of FIG. 1 of the present application;
FIG. 10 is a schematic illustration of a target image in an embodiment of the present application;
FIG. 11 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 12 is a block diagram of the image processing apparatus in the embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a schematic flowchart illustrating an image processing method according to an embodiment of the present application, the method including:
step 101, performing amplification processing on an original image by a preset magnification based on an image linear scaling algorithm to obtain a first image, and performing amplification processing on the original image by the preset magnification based on an image bending scaling algorithm to obtain a second image;
in the embodiment of the present application, the images in a picture can be divided into two types according to how the pixel points forming them are arranged: an image whose pixel points are arranged in a nonlinear manner is called a bent image, such as an arc element in a picture; an image whose pixel points are arranged in a linear manner is called a linear image, such as a straight-line element in a picture. The original image in the embodiment of the present application therefore includes a background portion and an image portion, and the image portion is composed of bent images and linear images. To better understand the technical solution in the embodiment of the present application, fig. 2 shows a schematic diagram of an original image in the embodiment of the present application; the original image includes an upper region and a lower region, the image contained in the upper region is a bent image, and the image contained in the lower region is a linear image.
Specifically, the image linear scaling algorithm is a nearest neighbor interpolation algorithm, and performs amplification processing on the original image by a preset magnification based on the image linear scaling algorithm to obtain the first image, and specifically includes: and amplifying the original image by preset times based on the nearest neighbor interpolation algorithm to obtain a first image.
The nearest neighbor interpolation algorithm here should not be confused with the k-Nearest Neighbors (kNN) classification algorithm, which, given a training set D and a test object z, computes the distance (or similarity) between z and each training object and assigns z the class that dominates among its nearest neighbors. In image scaling, nearest neighbor interpolation simply assigns each pixel of the output image the gray value of the single source pixel closest to its back-projected coordinate in the original image.
In the embodiment of the application, the original image is magnified by the preset multiple through the nearest neighbor interpolation algorithm to obtain the first image: for every pixel of the magnified image, its coordinate is mapped back into the original image and the gray value of the nearest original pixel is copied to it. The nearest neighbor interpolation algorithm involves little computation and is simple, so it runs fast. However, because each output pixel takes the gray value of only the single nearest original pixel and the influence of the other neighboring pixels is ignored, the resampled gray values are clearly discontinuous along the bent images in the first image; the bent images therefore suffer a large quality loss and show obvious mosaic and jagged artifacts, whereas along the linear images the resampled gray values remain continuous. Consequently, the linear images in the first image obtained by magnifying the original image by the preset multiple through the nearest neighbor interpolation algorithm are sharp.
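As an illustration, a minimal NumPy sketch of this nearest-neighbor magnification is given below. It assumes an 8-bit gray-scale image stored as a NumPy array; the function name is chosen for illustration only and is not part of the patent text.

```python
import numpy as np

def nearest_neighbor_upscale(image: np.ndarray, factor: int) -> np.ndarray:
    """Magnify a gray-scale image by an integer factor using nearest-neighbor
    interpolation: each output pixel copies the value of the closest source pixel."""
    h, w = image.shape
    out = np.empty((h * factor, w * factor), dtype=image.dtype)
    for i in range(h * factor):
        for j in range(w * factor):
            # map the output coordinate back to the nearest source pixel
            src_i = min(int(round(i / factor)), h - 1)
            src_j = min(int(round(j / factor)), w - 1)
            out[i, j] = image[src_i, src_j]
    return out
```

Because only one source pixel influences each output pixel, straight strokes stay crisp while curved strokes show the staircase artifacts described above.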
The preset multiple may be 10, 15, 20, or the like, and in practical applications, the value of the preset multiple may be set according to specific needs, which is not limited herein.
To better understand the technical solution in the embodiment of the present application, fig. 3 shows a schematic diagram of the first image in the embodiment of the present application. The first image is obtained by magnifying the original image shown in fig. 2 by 10 times through the nearest neighbor interpolation algorithm. The image contained in the upper area of the first image is the bent image of the upper area of the original image magnified 10 times; it is obvious from the schematic diagram that the bent image magnified 10 times through the nearest neighbor interpolation algorithm shows obvious mosaic and jagged artifacts. The image contained in the lower area of the first image is the linear image of the lower area of the original image magnified 10 times; it is obvious from the schematic diagram that the linear image magnified 10 times through the nearest neighbor interpolation algorithm is very sharp.
In this embodiment of the present application, the image warping and scaling algorithm is a bilinear interpolation algorithm, and specifically, the performing, based on the image warping and scaling algorithm, the amplification processing of amplifying the original image by the preset factor to obtain a second image includes: and performing amplification processing on the original image by preset times based on a bilinear interpolation algorithm to obtain a second image.
Mathematically, bilinear interpolation is the extension of linear interpolation to an interpolation function of two variables; its core idea is to perform linear interpolation in each of the two directions. As an interpolation algorithm in numerical analysis, bilinear interpolation is widely applied in signal processing and in digital image and video processing.
Specifically, if the size of the original image is m × n and the size of the second image is a × b, the side-length ratios of the original image to the second image are m/a and n/b. The (i, j)-th pixel point of the second image (row i, column j) can then be mapped back onto the original image through these side-length ratios, and the corresponding coordinate is (i × (m/a), j × (n/b)). Obviously, this corresponding coordinate is in general not an integer, and a non-integer coordinate cannot be used directly on discrete image data. Bilinear interpolation therefore computes the value (gray value or RGB value) of the point from the four pixel points nearest to the corresponding coordinate. If the image is a grayscale image, the mathematical model of the gray value at point (i, j) is f(i, j) = h1 + h2·i + h3·j + h4·i·j, where h1, h2, h3, h4 are the corresponding coefficients.
The characteristic of the bilinear interpolation algorithm is that, because the interpolation is carried out over the four neighboring pixel points of the original image, the bent images in the original image remain smooth after being magnified by the preset multiple, so the magnified bent images look good; however, the same gray-level interpolation makes the gray values of the linear images in the original image discontinuous after magnification by the preset multiple, and the magnified linear images are blurred. The second image therefore contains smooth bent images and blurred linear images.
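Analogously, a minimal NumPy sketch of the bilinear magnification is given below, using the side-length-ratio mapping described above; the gray-scale array and the function name are again assumptions made for illustration.

```python
import numpy as np

def bilinear_upscale(image: np.ndarray, factor: int) -> np.ndarray:
    """Magnify a gray-scale image by an integer factor using bilinear interpolation:
    each output pixel is a weighted average of the four source pixels surrounding
    its back-projected coordinate."""
    h, w = image.shape
    out_h, out_w = h * factor, w * factor
    out = np.empty((out_h, out_w), dtype=np.float64)
    for i in range(out_h):
        for j in range(out_w):
            # back-project with the side-length ratios m/a and n/b
            y, x = i * h / out_h, j * w / out_w
            y0, x0 = int(y), int(x)
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            dy, dx = y - y0, x - x0
            # weighted average of the four nearest source pixels,
            # i.e. the expanded form of f(i, j) = h1 + h2*dx + h3*dy + h4*dx*dy
            out[i, j] = (image[y0, x0] * (1 - dx) * (1 - dy)
                         + image[y0, x1] * dx * (1 - dy)
                         + image[y1, x0] * (1 - dx) * dy
                         + image[y1, x1] * dx * dy)
    return out.astype(image.dtype)
```

Averaging over four neighbors is exactly what smooths curved strokes and, at the same time, softens the sharp edges of straight strokes and text.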
To better understand the technical solution in the embodiment of the present application, fig. 4 shows a schematic diagram of the second image in the embodiment of the present application. The second image is obtained by magnifying the original image shown in fig. 2 by 10 times through the bilinear interpolation algorithm. The image contained in the upper area of the second image is the bent image of the upper area of the original image magnified 10 times; it is obvious from the schematic diagram that the bent image magnified 10 times through the bilinear interpolation algorithm is very smooth. The image contained in the lower area of the second image is the linear image of the lower area of the original image magnified 10 times; it is obvious from the schematic diagram that the linear image magnified 10 times through the bilinear interpolation algorithm is blurred.
102, extracting pixel points belonging to a linear image from the first image to obtain a first characteristic image, and extracting pixel points belonging to a bent image from the second image to obtain a second characteristic image;
in the embodiment of the application, the original image is composed of different linear images and different bent images. As follows from the descriptions of the nearest neighbor interpolation algorithm and the bilinear interpolation algorithm above, the first image, obtained by magnifying the original image by the preset multiple through the nearest neighbor interpolation algorithm, contains sharp magnified linear images and unclear magnified bent images, while the second image, obtained by magnifying the original image by the preset multiple through the bilinear interpolation algorithm, contains sharp magnified bent images and unclear magnified linear images.
In the embodiment of the present application, as shown in fig. 5, a schematic flowchart of the step of refining the first feature image in step 102 in fig. 1 of the present application is shown, where the method includes:
step 501, traversing each pixel point in the first image, finding out pixel points with the same color of adjacent transverse pixel points and adjacent longitudinal pixel points in the first image as first characteristic pixel points, wherein the color is a non-background color, and recording the coordinates of the first characteristic pixel points;
step 502, setting the color of a pixel point in the first image, which is not the coordinate of the first characteristic pixel point, as a background color, so as to obtain the first characteristic image.
Specifically, a coordinate system is established on the first image, and each pixel point in the first image is traversed in this coordinate system. For each traversed pixel point, it is determined whether the color of its adjacent transverse pixel point is the same as the color of its adjacent longitudinal pixel point and whether that color is a non-background color. If the colors of the adjacent transverse pixel point and the adjacent longitudinal pixel point are the same, the pixel point is determined to be a first characteristic pixel point and its coordinate is recorded; if they are different, the pixel point is determined to be a non-first-characteristic pixel point and its pixel value is set to the background color. The first characteristic image is thereby obtained, composed of the first characteristic pixel points and the non-first-characteristic pixel points.
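A minimal NumPy sketch of this neighbor rule is given below. It assumes an 8-bit gray-scale array with a white (255) background and compares the right-hand and lower neighbors of each pixel; the function name and these choices are illustrative assumptions, not part of the patent text. The same routine is applied to the first image here (steps 501–502) and, analogously, to the second image in the next subsection (steps 701–702).

```python
import numpy as np

def extract_feature_pixels(image: np.ndarray, background: int = 255) -> np.ndarray:
    """Keep only pixels whose horizontal and vertical neighbors share the same
    non-background color; every other pixel is set to the background color."""
    h, w = image.shape
    feature = np.full_like(image, background)
    for i in range(h - 1):
        for j in range(w - 1):
            right, below = image[i, j + 1], image[i + 1, j]
            if right == below and right != background:
                # record this coordinate as a characteristic pixel
                feature[i, j] = image[i, j]
    return feature
```

Applied to the first image this keeps the sharp linear strokes, and applied to the second image it keeps the smooth bent strokes, matching the behavior described in the text.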
It can be understood that the first image, obtained by magnifying the original image by the preset multiple through the nearest neighbor interpolation algorithm, contains sharp magnified linear images and unclear magnified bent images; the first characteristic pixel points are therefore the pixel points of all of the sharp magnified linear images. Specifically, fig. 6 shows a schematic diagram of a first characteristic image in the embodiment of the present application.
In the embodiment of the present application, as shown in fig. 7, a flowchart of the step of refining the second feature image in step 102 in fig. 1 of the present application is shown, where the method includes:
step 701, traversing each pixel point in the second image, finding out pixel points with the same color of adjacent horizontal pixel points and adjacent vertical pixel points in the second image as second characteristic pixel points, wherein the color is a non-background color, and recording the coordinates of the second characteristic pixel points;
step 702, setting the color of a pixel point in the second image, which is not the coordinate of the second characteristic pixel point, as a background color, so as to obtain the second characteristic image.
Specifically, a coordinate system is established on the second image, and each pixel point in the second image is traversed in this coordinate system. For each traversed pixel point, it is determined whether the color of its adjacent transverse pixel point is the same as the color of its adjacent longitudinal pixel point and whether that color is a non-background color. If the colors of the adjacent transverse pixel point and the adjacent longitudinal pixel point are the same, the pixel point is determined to be a second characteristic pixel point and its coordinate is recorded; if they are different, the pixel point is determined to be a non-second-characteristic pixel point and its pixel value is set to the background color. The second characteristic image is thereby obtained, composed of the second characteristic pixel points and the non-second-characteristic pixel points.
It can be understood that the second image, obtained by magnifying the original image by the preset multiple through the bilinear interpolation algorithm, contains sharp magnified bent images and unclear magnified linear images; the second characteristic pixel points are therefore the pixel points of all of the sharp magnified bent images. Specifically, fig. 8 shows a schematic diagram of a second characteristic image in the embodiment of the present application.
And 103, performing superposition operation on the first characteristic image and the second characteristic image to obtain a target image.
In the embodiment of the present application, as shown in fig. 9, a schematic flowchart of a refinement step of step 103 in the embodiment shown in fig. 1 of the present application is shown, where the method includes:
step 901, determining a target pixel point according to pixel points at corresponding positions in the first characteristic image and the second characteristic image;
specifically, if a first characteristic pixel point exists at a corresponding position and a second characteristic pixel point does not exist, the first characteristic pixel point is reserved as a target pixel point; if the second characteristic pixel point exists in the corresponding position and the first characteristic pixel point does not exist, the second characteristic pixel point is reserved as a target pixel point; if the first characteristic pixel point does not exist and the second characteristic pixel point does not exist in the corresponding position, the pixel point in the corresponding position is replaced by the pixel point with the background color to serve as the target pixel point.
And step 902, combining the target pixel points into the target image.
In the embodiment of the present application, it follows from the above that the first characteristic image is composed of first characteristic pixel points and non-first-characteristic pixel points with the background color, where the first characteristic pixel points are the pixel points of all of the sharp magnified linear images in the first image obtained by magnifying the original image by the preset multiple through the nearest neighbor interpolation algorithm; likewise, the second characteristic image is composed of second characteristic pixel points and non-second-characteristic pixel points with the background color, where the second characteristic pixel points are the pixel points of all of the sharp magnified bent images in the second image obtained by magnifying the original image by the preset multiple through the bilinear interpolation algorithm.
Further, the first characteristic image and the second characteristic image are superimposed. Each pixel position in the superimposed image corresponds to a pixel point at that position in the first characteristic image and a pixel point at that position in the second characteristic image, so the two pixel points at every position need to be compared and one of them selected. If a first characteristic pixel point exists at a position of the superimposed image and no second characteristic pixel point exists there, the first characteristic pixel point is kept as the target pixel point; if a second characteristic pixel point exists at a position and no first characteristic pixel point exists there, the second characteristic pixel point is kept as the target pixel point; if neither a first characteristic pixel point nor a second characteristic pixel point exists at a position, the pixel point at that position is replaced with a pixel point of the background color as the target pixel point. The target image composed of the target pixel points is then finally obtained.
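A minimal sketch of this superposition rule, consistent with the extraction sketch above (white background assumed), could look as follows. The text does not state what happens when both a first and a second characteristic pixel point occupy the same position; keeping the first characteristic pixel point in that case is an assumption of the sketch.

```python
import numpy as np

def superimpose(first_feat: np.ndarray, second_feat: np.ndarray,
                background: int = 255) -> np.ndarray:
    """Merge the two characteristic images pixel by pixel following the rules above."""
    target = np.full_like(first_feat, background)
    h, w = first_feat.shape
    for i in range(h):
        for j in range(w):
            has_first = first_feat[i, j] != background
            has_second = second_feat[i, j] != background
            if has_first:
                target[i, j] = first_feat[i, j]    # keep the first characteristic pixel
            elif has_second:
                target[i, j] = second_feat[i, j]   # keep the second characteristic pixel
            # otherwise the background pixel placed above is kept
    return target
```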
To better understand the technical solution in the embodiment of the present application, fig. 10 shows a schematic diagram of a target image in the embodiment of the present application. The first image shown in fig. 3 is obtained by magnifying the original image shown in fig. 2 by 10 times through the nearest neighbor interpolation algorithm, and the second image shown in fig. 4 is obtained by magnifying the original image shown in fig. 2 by 10 times through the bilinear interpolation algorithm. The first characteristic image shown in fig. 6 is obtained by extracting the first characteristic pixel points from the first image, and the second characteristic image shown in fig. 8 is obtained by extracting the second characteristic pixel points from the second image. The target image shown in fig. 10 is then the image obtained by superimposing the first characteristic image and the second characteristic image.
It can be understood that the target image includes all the first characteristic pixel points, all the second characteristic pixel points and pixel points with background colors, so that the finally obtained target image includes all the clear amplified linear images in the first image and all the clear amplified bent images in the second image.
In an embodiment of the present application, an image processing method includes: magnifying an original image by a preset multiple based on an image linear scaling algorithm to obtain a first image; magnifying the original image by the preset multiple based on an image bending scaling algorithm to obtain a second image; extracting pixel points belonging to linear images from the first image to obtain a first characteristic image; extracting pixel points belonging to bent images from the second image to obtain a second characteristic image; and superimposing the first characteristic image and the second characteristic image to obtain a target image. The original image is composed of different linear images and different bent images; by magnifying the original image with both the nearest neighbor interpolation algorithm and the bilinear interpolation algorithm and then extracting and superimposing the characteristic images of the magnified images, a final target image is obtained that combines all of the sharp linear images and all of the sharp bent images of the magnified original image, which improves the image quality of the obtained target image.
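Putting the sketches above together, the whole flow of fig. 1 under the stated assumptions (gray-scale NumPy arrays, white background, illustrative function and file names) could be exercised as follows:

```python
import numpy as np
from PIL import Image

# "original.png" is an illustrative file name; any gray-scale original works.
original = np.array(Image.open("original.png").convert("L"))

factor = 10                                                 # preset magnification multiple
first_image = nearest_neighbor_upscale(original, factor)    # step 101, linear scaling
second_image = bilinear_upscale(original, factor)           # step 101, bending scaling
first_feature = extract_feature_pixels(first_image)         # step 102, keeps linear images
second_feature = extract_feature_pixels(second_image)       # step 102, keeps bent images
target_image = superimpose(first_feature, second_feature)   # step 103, superposition
Image.fromarray(target_image).save("target.png")
```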
The image processing method in the embodiment of the application can be applied to solve the problem of high-resolution printing of DICOM (Digital Imaging and Communications in Medicine) images, i.e. in the printing of medical digital imaging and communication images, and can also improve image quality. DICOM is widely used in medical imaging devices for radiation medicine, cardiovascular imaging, and radiological diagnosis and treatment, such as X-ray, CT, magnetic resonance, and ultrasound, and is increasingly used in ophthalmology, dentistry, and other medical fields. A medical image generated by such equipment contains an image part and a text part, where the image part is a bent image and the text part is a linear image. The medical image is magnified by a preset multiple through the nearest neighbor interpolation algorithm to obtain a magnified first medical image whose image part shows mosaic and jagged artifacts but whose text part is sharp, and the medical image is magnified by the preset multiple through the bilinear interpolation algorithm to obtain a magnified second medical image whose image part is smooth but whose text part is blurred. A first characteristic medical image is extracted from the first medical image; it contains first characteristic pixel points, which form the sharp text part, and non-first-characteristic pixel points with the background color. A second characteristic medical image is extracted from the second medical image; it contains second characteristic pixel points, which form the smooth image part, and non-second-characteristic pixel points with the background color. The first characteristic medical image and the second characteristic medical image are superimposed to obtain the final target medical image, which contains the first characteristic pixel points, the second characteristic pixel points, and pixel points of the background color, i.e. a sharp text part, a smooth image part, and the background part. The purpose of improving the quality of the target medical image through the image processing method of the present application is thereby achieved.
As shown in fig. 11, which is a schematic structural diagram of an image processing apparatus in an embodiment of the present application, the apparatus includes:
the processing module 1101 is configured to perform amplification processing on an original image by a preset magnification based on an image linear scaling algorithm to obtain a first image, and perform amplification processing on the original image by the preset magnification based on an image bending scaling algorithm to obtain a second image;
an extracting module 1102, configured to extract pixel points belonging to a linear image from the first image to obtain a first feature image, and extract pixel points belonging to a bent image from the second image to obtain a second feature image;
a superimposing module 1103, configured to perform a superimposing operation on the first feature image and the second feature image to obtain a target image.
In an embodiment of the present application, an image processing apparatus includes: a processing module 1101, configured to magnify an original image by a preset multiple based on an image linear scaling algorithm to obtain a first image, and to magnify the original image by the preset multiple based on an image bending scaling algorithm to obtain a second image; an extracting module 1102, configured to extract pixel points belonging to linear images from the first image to obtain a first characteristic image, and to extract pixel points belonging to bent images from the second image to obtain a second characteristic image; and a superimposing module 1103, configured to superimpose the first characteristic image and the second characteristic image to obtain a target image. The image processing apparatus magnifies the original image, extracts characteristic images from the magnified images, and superimposes them to obtain the final target image, which improves the image quality of the obtained target image.
Fig. 12 shows the internal structure of the image processing device in one embodiment. The image processing device may specifically be a terminal or a server. As shown in fig. 12, the image processing device includes a processor, a memory, and a network interface connected by a system bus, where the memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium of the image processing device stores an operating system and may also store a computer program that, when executed by the processor, causes the processor to implement the image processing method. The internal memory may also store a computer program that, when executed by the processor, causes the processor to perform the image processing method. It will be understood by those skilled in the art that the structure shown in fig. 12 is a block diagram of only the part of the structure relevant to the present application and does not limit the image processing device to which the present application is applied; a specific image processing device may include more or fewer components than shown in the drawing, combine some components, or have a different arrangement of components.
In one embodiment, an image processing apparatus is presented, comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of:
the method comprises the steps that an original image is amplified by a preset multiple based on an image linear scaling algorithm to obtain a first image, and the original image is amplified by the preset multiple based on an image bending scaling algorithm to obtain a second image;
extracting pixel points belonging to a linear image from the first image to obtain a first characteristic image, and extracting pixel points belonging to a bent image from the second image to obtain a second characteristic image;
and performing superposition operation on the first characteristic image and the second characteristic image to obtain a target image.
In one embodiment, a computer-readable storage medium is proposed, in which a computer program is stored which, when executed by a processor, causes the processor to carry out the steps of:
the method comprises the steps that an original image is amplified by a preset multiple based on an image linear scaling algorithm to obtain a first image, and the original image is amplified by the preset multiple based on an image bending scaling algorithm to obtain a second image;
extracting pixel points belonging to a linear image from the first image to obtain a first characteristic image, and extracting pixel points belonging to a bent image from the second image to obtain a second characteristic image;
and performing superposition operation on the first characteristic image and the second characteristic image to obtain a target image.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a non-volatile computer-readable storage medium and which, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the present application. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (10)

1. A method of image processing, the method comprising:
the method comprises the steps that an original image is amplified by a preset multiple based on an image linear scaling algorithm to obtain a first image, and the original image is amplified by the preset multiple based on an image bending scaling algorithm to obtain a second image;
extracting pixel points belonging to a linear image from the first image to obtain a first characteristic image, and extracting pixel points belonging to a bent image from the second image to obtain a second characteristic image;
and performing superposition operation on the first characteristic image and the second characteristic image to obtain a target image.
2. The method according to claim 1, wherein the extracting pixel points belonging to a linear image from the first image to obtain a first feature image comprises:
traversing each pixel point in the first image, finding out pixel points with the same color of adjacent transverse pixel points and adjacent longitudinal pixel points in the first image as first characteristic pixel points, wherein the color is a non-background color, and recording the coordinates of the first characteristic pixel points;
setting the color of a pixel point which is not the coordinate of the first characteristic pixel point in the first image as a background color to obtain the first characteristic image.
3. The method according to claim 1, wherein the extracting pixel points belonging to the warped image from the second image to obtain a second feature image comprises:
traversing each pixel point in the second image, finding out pixel points with the same color of adjacent transverse pixel points and adjacent longitudinal pixel points in the second image as second characteristic pixel points, wherein the color is a non-background color, and recording the coordinates of the second characteristic pixel points;
setting the color of a pixel point which is not the coordinate of the second characteristic pixel point in the second image as a background color to obtain the second characteristic image.
4. The method of claim 3, wherein the superimposing the first feature image and the second feature image to obtain a target image comprises:
determining target pixel points according to the pixel points at the corresponding positions in the first characteristic image and the second characteristic image;
and combining the target pixel points into the target image.
5. The method of claim 4, wherein determining the target pixel point according to the pixel points at the corresponding positions in the first feature image and the second feature image comprises:
if the first characteristic pixel point exists in the corresponding position and the second characteristic pixel point does not exist, the first characteristic pixel point is reserved as a target pixel point;
if the second characteristic pixel point exists in the corresponding position and the first characteristic pixel point does not exist, the second characteristic pixel point is reserved as a target pixel point;
and if the first characteristic pixel point does not exist and the second characteristic pixel point does not exist in the corresponding position, replacing the pixel point in the corresponding position with a pixel point with background color as a target pixel point.
6. The method of claim 1, wherein the image linear scaling algorithm is a nearest neighbor interpolation algorithm, and the performing a preset magnification process on the original image based on the image linear scaling algorithm to obtain the first image comprises:
and amplifying the original image by the preset times based on the nearest neighbor interpolation algorithm to obtain the first image.
7. The method of claim 1, wherein the image warping and scaling algorithm is a bilinear interpolation algorithm, and the performing the magnifying process of magnifying the original image by the preset factor based on the image warping and scaling algorithm to obtain the second image includes:
and carrying out amplification processing of amplifying the original image by the preset times based on the bilinear interpolation algorithm to obtain the second image.
8. An image processing apparatus, characterized in that the apparatus comprises:
the processing module is used for carrying out amplification processing on the original image by preset times based on an image linear scaling algorithm to obtain a first image, and carrying out amplification processing on the original image by the preset times based on an image bending scaling algorithm to obtain a second image;
the extraction module is used for extracting pixel points belonging to a linear image from the first image to obtain a first characteristic image, and extracting pixel points belonging to a bent image from the second image to obtain a second characteristic image;
and the superposition module is used for carrying out superposition operation on the first characteristic image and the second characteristic image to obtain a target image.
9. A computer-readable storage medium, storing a computer program which, when executed by a processor, causes the processor to carry out the steps of the method according to any one of claims 1 to 7.
10. An image processing apparatus comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of the method according to any one of claims 1 to 7.
CN202011191970.6A 2020-10-30 2020-10-30 Image processing method, device, storage medium and equipment Pending CN112381714A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011191970.6A CN112381714A (en) 2020-10-30 2020-10-30 Image processing method, device, storage medium and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011191970.6A CN112381714A (en) 2020-10-30 2020-10-30 Image processing method, device, storage medium and equipment

Publications (1)

Publication Number Publication Date
CN112381714A true CN112381714A (en) 2021-02-19

Family

ID=74577391

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011191970.6A Pending CN112381714A (en) 2020-10-30 2020-10-30 Image processing method, device, storage medium and equipment

Country Status (1)

Country Link
CN (1) CN112381714A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110170803A1 (en) * 2010-01-14 2011-07-14 Fujitsu Semiconductor Limited Apparatus and method for image processing
CN104299185A (en) * 2014-09-26 2015-01-21 京东方科技集团股份有限公司 Image magnification method, image magnification device and display device
CN108681992A (en) * 2018-04-23 2018-10-19 南京理工大学 The image interpolation algorithm of laser facula is measured for detector array method
CN109978766A (en) * 2019-03-12 2019-07-05 深圳市华星光电技术有限公司 Image magnification method and image amplifying device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LU, Jun; ZHANG, Qigui: "Research on the evaluation of interpolation algorithms in image scaling", Tongmei Keji, no. 01

Similar Documents

Publication Publication Date Title
EP3816928A1 (en) Image super-resolution reconstruction method, image super-resolution reconstruction apparatus, and computer-readable storage medium
DE69937476T2 (en) Image processing apparatus and method and storage medium
CN110136055B (en) Super resolution method and device for image, storage medium and electronic device
CN107895345A (en) A kind of method and apparatus for improving facial image resolution ratio
CN111291813B (en) Image labeling method, device, computer equipment and storage medium
DE19715491A1 (en) Interpolation method and device for rapid image enlargement
CN108109109B (en) Super-resolution image reconstruction method, device, medium and computing equipment
CN112801904B (en) Hybrid degraded image enhancement method based on convolutional neural network
CN111681165A (en) Image processing method, image processing device, computer equipment and computer readable storage medium
JP2009524861A (en) Method and apparatus for improving resolution of digital image
CN112132836A (en) Video image clipping method and device, electronic equipment and storage medium
CN111314688B (en) Disparity map hole filling method and device and electronic system
CN114449181B (en) Image and video processing method and system, data processing device and medium
CN111179173A (en) Image splicing method based on discrete wavelet transform and gradient fusion algorithm
KR20200099633A (en) Method and computer program for analyzing texture of an image
CN107220934A (en) Image rebuilding method and device
CN112801876B (en) Information processing method and device, electronic equipment and storage medium
CN112801879B (en) Image super-resolution reconstruction method and device, electronic equipment and storage medium
CN112381714A (en) Image processing method, device, storage medium and equipment
CN112883983A (en) Feature extraction method and device and electronic system
CN113570531B (en) Image processing method, apparatus, electronic device, and computer-readable storage medium
JP3161013B2 (en) Region separation method and image processing apparatus
KR102414299B1 (en) System and method for enhancing quality of CT(computed tomography) scan using AI(artificial intelligence)
CN102842111B (en) Enlarged image compensation method and device
JPH07334648A (en) Method and device for processing image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination