CN118037542A - Image processing method, device, equipment and storage medium
- Publication number: CN118037542A
- Application number: CN202410160426.7A
- Authority: CN (China)
- Prior art keywords: image, target, region of interest, image data
- Legal status: Pending (the status is an assumption and is not a legal conclusion)
Classifications
- G06T3/4023: Scaling of whole images or parts thereof, based on decimating or inserting pixels or lines of pixels
- G06T5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06V10/25: Determination of region of interest [ROI] or a volume of interest [VOI]
- G06V10/80: Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06T2207/20104: Interactive definition of region of interest [ROI]
- G06T2207/20221: Image fusion; image merging
Abstract
The invention relates to the technical field of medical devices and discloses an image processing method, apparatus, device, and storage medium. The method comprises the following steps: acquiring an image to be processed corresponding to a target object; identifying a target region of interest in the image to be processed; performing interpolation processing on the target region of interest based on a first interpolation algorithm to obtain a first processed image; performing interpolation processing on the other regions in the image to be processed based on a second interpolation algorithm to obtain a second processed image, the algorithm performance of the first interpolation algorithm being superior to that of the second; and fusing the first processed image and the second processed image to obtain a target display image. The invention mitigates the poor image scaling caused by the limited processing rate of the processor, thereby improving the image scaling effect.
Description
Technical Field
The present invention relates to the technical field of medical devices, and in particular to an image processing method, an image processing apparatus, a computer device, and a computer-readable storage medium.
Background
With the development of technology, high-definition electronic endoscopes have become increasingly diverse, and high-resolution image sensors are widely used in endoscopes of all kinds to acquire high-resolution images. When viewing a high-resolution image acquired by an image sensor, a user often needs to electronically zoom in or out without changing the resolution in order to observe a region of interest in the image. To preserve the display quality of the zoomed image, a high-quality interpolation algorithm is usually required to interpolate the high-resolution image.
However, high-quality interpolation algorithms are generally complex and occupy substantial computational resources, while the processing rate of the endoscope's processor is limited. It is therefore difficult for the processor to apply high-quality interpolation to every pixel of a high-resolution image, resulting in poor image scaling.
Disclosure of Invention
In view of the above, the present invention provides an image processing method, apparatus, device, and storage medium to address the poor image scaling caused by the limited processing rate of a processor.
In a first aspect, the present invention provides an image processing method, the method comprising:
acquiring an image to be processed corresponding to a target object;
identifying a target region of interest in the image to be processed;
performing interpolation processing on the target region of interest based on a first interpolation algorithm to obtain a first processed image;
performing interpolation processing on the other regions in the image to be processed based on a second interpolation algorithm to obtain a second processed image, wherein the algorithm performance of the first interpolation algorithm is superior to that of the second interpolation algorithm;
and fusing the first processed image and the second processed image to obtain a target display image.
According to the image processing method provided by the embodiments of the invention, the target region of interest in the image to be processed is identified; the target region of interest is interpolated with a first, better-performing interpolation algorithm, while the other regions are interpolated with a second, relatively weaker algorithm; the resulting first and second processed images are then fused into the target display image. A high-quality interpolation algorithm therefore need not be applied to every pixel of the image to be processed, which saves computing resources and reduces power consumption, while the scaling quality of the target region of interest is preserved. To a certain extent this alleviates the poor image scaling caused by the limited processing rate of the processor and improves the scaling effect.
In some optional embodiments, the identifying the target region of interest in the image to be processed includes:
decoding the image to be processed to obtain original image data;
calculating a target direction operator of the original image data on a target color component;
performing edge detection on the original image data based on the target direction operator to obtain a first region of interest;
and determining a target region of interest in the image to be processed based on the first region of interest.
In some optional embodiments, the performing edge detection on the raw image data based on the target direction operator to obtain a first region of interest includes:
performing matrix division on pixels in the original image data to obtain a plurality of first pixel matrices;
obtaining edge state information corresponding to the first pixel matrix based on a target direction operator corresponding to the first pixel matrix and a preset detection value corresponding to the target direction operator;
and when the edge state information corresponding to any one of the first pixel matrices is edge information, taking the region corresponding to that first pixel matrix in the original image data as the first region of interest.
In some optional embodiments, the obtaining the edge state information corresponding to the first pixel matrix based on the target direction operator corresponding to the first pixel matrix and the preset detection value corresponding to the target direction operator includes:
if any one of the target direction operators of the first pixel matrix is greater than its corresponding preset detection value, determining that the edge state information corresponding to the first pixel matrix indicates that edge information exists in the first pixel matrix;
if all the target direction operators of the first pixel matrix are less than or equal to their corresponding preset detection values, determining that the edge state information corresponding to the first pixel matrix indicates that no edge information exists in the first pixel matrix.
In some alternative embodiments, the calculating the target direction operator of the raw image data on the target color component includes:
acquiring component image data of the original image data on the target color component;
calculating target direction operators for a plurality of detection directions based on pixel values of the component image data in the plurality of detection directions.
In some alternative embodiments, the plurality of detection directions includes the 135° direction, the 45° direction, the horizontal direction, and the vertical direction; the target direction operators of the detection directions are calculated by the following formulas:
G135=|G(i+1,j-1)-G(i-1,j+1)|;
G45=|G(i+1,j+1)-G(i-1,j-1)|;
G0=|G(i-1,j-1)+G(i+1,j-1)-G(i-1,j+1)-G(i+1,j+1)|/2;
G90=|G(i-1,j-1)+G(i-1,j+1)-G(i+1,j-1)-G(i+1,j+1)|/2;
wherein G135 denotes the target direction operator in the 135° direction, G45 the operator in the 45° direction, G0 the operator in the horizontal direction, and G90 the operator in the vertical direction; G(i+1, j-1), G(i-1, j+1), G(i+1, j+1), and G(i-1, j-1) denote the pixel values of the component image data at row i+1, column j-1; row i-1, column j+1; row i+1, column j+1; and row i-1, column j-1, respectively.
In some optional embodiments, the determining, based on the first region of interest, a target region of interest in the image to be processed includes:
processing the original image data to obtain target image data in RGB format;
calculating heme concentration values corresponding to all pixels in the target image data;
determining a preset pseudo-color display range;
performing pseudo-color identification on the original image data based on the heme concentration values and the preset pseudo-color display range to obtain a second region of interest;
fitting the first region of interest and the second region of interest to obtain a third region of interest;
and taking the region corresponding to the third region of interest in the image to be processed as the target region of interest.
In some optional embodiments, the performing, based on the heme concentration value and the preset pseudo color display range, pseudo color identification on the raw image data to obtain a second region of interest includes:
if the heme concentration value is within the preset pseudo-color display range, marking the pixel corresponding to that heme concentration value;
performing matrix division on pixels in the target image data to obtain a plurality of second pixel matrices;
determining whether marked pixels exist in each second pixel matrix;
and if marked pixels exist in any second pixel matrix, taking the region corresponding to that second pixel matrix in the target image data as the second region of interest.
In some alternative embodiments, the heme concentration value is calculated by the following formula:
IHB = C × log2(P / G);
wherein IHB is the heme concentration value, P is the red component or the blue component of the pixel in the target image data, G is the green component of the pixel in the target image data, and C is a constant.
In a second aspect, the present invention provides an image processing apparatus comprising:
the image acquisition module is used for acquiring an image to be processed corresponding to the target object;
the region identification module is used for identifying a target region of interest in the image to be processed;
the first processing module is used for carrying out interpolation processing on the target region of interest based on a first interpolation algorithm to obtain a first processed image;
the second processing module is used for performing interpolation processing on the other regions in the image to be processed based on a second interpolation algorithm to obtain a second processed image, wherein the algorithm performance of the first interpolation algorithm is superior to that of the second interpolation algorithm;
and the image fusion module is used for fusing the first processed image and the second processed image to obtain a target display image.
In some alternative embodiments, the region identification module includes:
the image decoding unit is used for decoding the image to be processed to obtain original image data;
the operator calculation unit is used for calculating a target direction operator of the original image data on a target color component;
the edge detection unit is used for performing edge detection on the original image data based on the target direction operator to obtain a first region of interest;
and the region determining unit is used for determining a target region of interest in the image to be processed based on the first region of interest.
In some alternative embodiments, the edge detection unit includes:
a pixel matrix dividing subunit, configured to perform matrix division on pixels in the original image data to obtain a plurality of first pixel matrices;
an edge information generating subunit, configured to obtain edge state information corresponding to the first pixel matrix based on a target direction operator corresponding to the first pixel matrix and a preset detection value corresponding to the target direction operator;
and a first region identifying subunit, configured to take the region corresponding to any first pixel matrix in the original image data as the first region of interest when the edge state information corresponding to that first pixel matrix is edge information.
In some alternative embodiments, the edge information generation subunit is specifically configured to:
if any one of the target direction operators of the first pixel matrix is greater than its corresponding preset detection value, determining that the edge state information corresponding to the first pixel matrix indicates that edge information exists in the first pixel matrix;
if all the target direction operators of the first pixel matrix are less than or equal to their corresponding preset detection values, determining that the edge state information corresponding to the first pixel matrix indicates that no edge information exists in the first pixel matrix.
In some alternative embodiments, the operator computing unit includes:
a component image acquisition subunit configured to acquire component image data of the original image data on the target color component;
and a direction operator calculating subunit, configured to calculate target direction operators for a plurality of detection directions based on pixel values of the component image data in the plurality of detection directions.
In some alternative embodiments, the plurality of detection directions includes the 135° direction, the 45° direction, the horizontal direction, and the vertical direction; the direction operator calculating subunit calculates the target direction operators of the plurality of detection directions by the following formulas:
G135=|G(i+1,j-1)-G(i-1,j+1)|;
G45=|G(i+1,j+1)-G(i-1,j-1)|;
G0=|G(i-1,j-1)+G(i+1,j-1)-G(i-1,j+1)-G(i+1,j+1)|/2;
G90=|G(i-1,j-1)+G(i-1,j+1)-G(i+1,j-1)-G(i+1,j+1)|/2;
wherein G135 denotes the target direction operator in the 135° direction, G45 the operator in the 45° direction, G0 the operator in the horizontal direction, and G90 the operator in the vertical direction; G(i+1, j-1), G(i-1, j+1), G(i+1, j+1), and G(i-1, j-1) denote the pixel values of the component image data at row i+1, column j-1; row i-1, column j+1; row i+1, column j+1; and row i-1, column j-1, respectively.
In some alternative embodiments, the region determining unit includes:
an image color conversion subunit, configured to process the original image data to obtain target image data in RGB format;
a heme calculation subunit, configured to calculate the heme concentration value corresponding to each pixel in the target image data;
a pseudo-color range determining subunit, configured to determine a preset pseudo-color display range;
a second region identification subunit, configured to perform pseudo-color identification on the original image data based on the heme concentration values and the preset pseudo-color display range to obtain a second region of interest;
a key region fitting subunit, configured to fit the first region of interest and the second region of interest to obtain a third region of interest;
and a key region determining subunit, configured to take the region corresponding to the third region of interest in the image to be processed as the target region of interest.
In some alternative embodiments, the second region identification subunit is specifically configured to:
if the heme concentration value is within the preset pseudo-color display range, marking the pixel corresponding to that heme concentration value;
performing matrix division on pixels in the target image data to obtain a plurality of second pixel matrices;
determining whether marked pixels exist in each second pixel matrix;
and if marked pixels exist in any second pixel matrix, taking the region corresponding to that second pixel matrix in the target image data as the second region of interest.
In some alternative embodiments, the heme calculation subunit calculates the heme concentration value by the following formula:
IHB = C × log2(P / G);
wherein IHB is the heme concentration value, P is the red component or the blue component of the pixel in the target image data, G is the green component of the pixel in the target image data, and C is a constant.
In a third aspect, the present invention provides a computer device, comprising a memory and a processor that are communicatively connected, wherein the memory stores computer instructions and the processor executes the computer instructions to perform the image processing method of the first aspect or any corresponding embodiment thereof.
In a fourth aspect, the present invention provides a computer-readable storage medium having stored thereon computer instructions for causing a computer to execute the image processing method of the first aspect or any of its corresponding embodiments.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings described below represent some embodiments of the present invention; other drawings can be derived from them by a person skilled in the art without inventive effort.
Fig. 1 is a flowchart of a first image processing method according to an embodiment of the present invention;
Fig. 2 is a flowchart of a second image processing method according to an embodiment of the present invention;
fig. 3 is a flowchart of a third image processing method according to an embodiment of the present invention;
Fig. 4 is a flowchart of a fourth image processing method according to an embodiment of the present invention;
fig. 5 is a flowchart of a fifth image processing method according to an embodiment of the present invention;
FIG. 6 is a schematic view of an endoscope according to the present invention;
Fig. 7 is a flowchart of a sixth image processing method according to an embodiment of the present invention;
Fig. 8 is a block diagram of an image processing apparatus according to an embodiment of the present invention;
fig. 9 is a block diagram of a computer device according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions of the embodiments are described below clearly and completely with reference to the accompanying drawings. The described embodiments are some, but not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on these embodiments without inventive effort fall within the scope of the present invention.
As technology develops, high-definition electronic endoscopes have become increasingly diverse, application scenarios for high-resolution image sensors have multiplied, and high-resolution images place ever-higher demands on processor speed. To accommodate high-resolution application scenarios, the rate at which the processor processes images must therefore be increased.
Currently, when viewing a high-resolution image acquired by an image sensor, a user often needs to electronically zoom in or out without changing the resolution in order to observe a region of interest in the image. Accordingly, when an endoscope performs electronic enlargement or reduction on a high-resolution image, the image must be interpolated with a high-quality interpolation algorithm.
Furthermore, as the resolution and refresh rate of the endoscope's image sensor increase, the pixel clock and the volume of acquired image data grow rapidly. When an image acquired by the sensor is to be shown on a high-resolution display, whose resolution is often higher than the sensor's, the image typically must be interpolated with a high-quality algorithm to improve the viewing experience.
However, because the image processing rate is limited, it is difficult to process all pixels with a computation-heavy algorithm, resulting in poor image scaling and display quality for the endoscope.
Based on this, embodiments of the present invention provide an image processing method, apparatus, device, and storage medium that mitigate the poor image scaling caused by the limited processing rate of a processor, thereby improving the image scaling effect.
An embodiment of an image processing method is provided according to the present invention. It should be noted that the steps shown in the flowcharts of the drawings may be executed in a computer system, such as a set of computer-executable instructions, and that although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in a different order.
In the present embodiment, an image processing method is provided, which can be used for the endoscope described above. Fig. 1 is a flowchart of a first image processing method according to an embodiment of the present invention, as shown in fig. 1, the flowchart including the steps of:
Step S101, a to-be-processed image corresponding to a target object is acquired.
For example, assuming the target object is a patient, the image to be processed may be a medical image, such as an image of a physiological structure of the target object: an incision, a lesion, a blood vessel, the stomach, and so on.
Step S102, identifying a target region of interest in the image to be processed.
It should be noted that the target region of interest typically has edges separating it from the other regions of the image to be processed, and/or its pseudo-color rendering differs from that of the other regions. In practice, the target region of interest can therefore be obtained by performing edge detection on the image to be processed; by performing pseudo-color identification on the image to be processed; or by performing both and combining the edge detection and pseudo-color identification results. Alternatively, the user may define the target region of interest manually; the identification method is not limited here.
Step S103, performing interpolation processing on the target region of interest based on a first interpolation algorithm to obtain a first processed image.
Specifically, the first interpolation algorithm is a high-quality interpolation algorithm, such as gradient-based stretching or a bicubic-type algorithm. High-quality interpolation algorithms tend to be complex and generally use more computing resources and consume more power, but the stretched image is of higher quality: clearer, with smoother lines and less aliasing. Interpolating the target region of interest with the high-quality first interpolation algorithm therefore improves the image quality and zoom display of that region.
Step S104, performing interpolation processing on the other regions in the image to be processed based on a second interpolation algorithm to obtain a second processed image, where the algorithm performance of the first interpolation algorithm is superior to that of the second interpolation algorithm.
Specifically, the second interpolation algorithm has lower algorithmic complexity than the first interpolation algorithm; an example is the bilinear interpolation algorithm. Such algorithms are simpler, generally use fewer computing resources, and consume less power, but produce lower image quality after stretching.
It can be understood that the first interpolation algorithm is a high-complexity, high-quality interpolation algorithm, while the second interpolation algorithm is a low-complexity one.
It should be noted that the other regions in the image to be processed are the regions of the image other than the target region of interest.
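As an illustration of steps S103 and S104, the following is a minimal Python sketch. The use of OpenCV, the function name, and the choice of bicubic as the "first" algorithm and bilinear as the "second" are assumptions of this sketch, not part of the patent:

```python
import cv2
import numpy as np

def interpolate_layers(image: np.ndarray, roi_box: tuple, factor: float):
    """roi_box = (x, y, w, h): bounding box of the target region of interest."""
    x, y, w, h = roi_box
    H, W = image.shape[:2]
    # Second processed image: the whole frame scaled with the cheaper
    # bilinear algorithm, standing in for the "second interpolation algorithm".
    second = cv2.resize(image, (int(W * factor), int(H * factor)),
                        interpolation=cv2.INTER_LINEAR)
    # First processed image: only the region of interest, scaled with the
    # higher-quality bicubic algorithm ("first interpolation algorithm"),
    # so the expensive computation never touches the other regions.
    roi = image[y:y + h, x:x + w]
    first = cv2.resize(roi, (int(w * factor), int(h * factor)),
                       interpolation=cv2.INTER_CUBIC)
    return first, second
```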
Step S105, fusing the first processed image and the second processed image to obtain a target display image.
Specifically, based on the result of identifying the target region of interest, the processor splits the target region of interest and the other regions of the image to be processed into separate layers that are processed in parallel: the target region of interest is interpolated with the better-performing first interpolation algorithm, and the other regions with the relatively weaker second interpolation algorithm. After each layer has been processed, the layers are fused: the image of the other regions is placed on the bottom layer and the image of the target region of interest on the top layer, yielding an initial fused image. The initial fused image is then smoothed with a filtering algorithm, such as mean filtering or median filtering, to obtain the target display image.
It can be appreciated that the other regions of the image to be processed are processed at lower quality and require fewer computing resources, whereas the target region of interest (for example, the blood vessel under examination) needs fine enlargement. Extracting the demanding target region and overlaying it on the lower-quality base layer effectively preserves the resolution and scaling quality of the image corresponding to the target region of interest, so that it can be observed more clearly.
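Continuing the sketch above, the fusion and smoothing of step S105 might look as follows. Median filtering is chosen here as one of the smoothing filters named in the text, an 8-bit image is assumed, and the function name remains hypothetical:

```python
import cv2
import numpy as np

def fuse_layers(first: np.ndarray, second: np.ndarray,
                roi_box: tuple, factor: float) -> np.ndarray:
    """Overlay the high-quality ROI layer on the base layer, then smooth."""
    x, y = int(roi_box[0] * factor), int(roi_box[1] * factor)
    fused = second.copy()                  # other regions: bottom layer
    fh, fw = first.shape[:2]
    fused[y:y + fh, x:x + fw] = first      # target ROI: top layer
    return cv2.medianBlur(fused, 3)        # smooth the initial fused image
```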
According to the image processing method above, the target region of interest in the image to be processed is identified; the target region of interest is interpolated with the better-performing first interpolation algorithm, while the other regions are interpolated with the relatively weaker second algorithm; the resulting first and second processed images are then fused to obtain the target display image. A high-quality interpolation algorithm therefore need not be applied to every pixel of the image, saving computing resources and reducing power consumption, while the scaling quality of the target region of interest is preserved. To a certain extent this alleviates the poor image scaling caused by the limited processing rate of the processor and improves the scaling effect.
Fig. 2 is a flowchart of a second image processing method according to an embodiment of the present invention, as shown in fig. 2, the flowchart including the steps of:
Step S201, a to-be-processed image corresponding to the target object is acquired.
Please refer to step S101 in the embodiment shown in fig. 1 in detail, which is not described herein.
Step S202, identifying a target region of interest in the image to be processed.
In some optional embodiments, the step S202 includes:
Step S2021, performing decoding processing on the image to be processed to obtain original image data.
To reduce the complexity of subsequent processing, the original image data here is RAW-format data, i.e., image data in which each pixel carries only a single color component, for example a red (R), green (G), or blue (B) component.
In step S2022, a target direction operator of the original image data on the target color component is calculated.
In RAW-format data, the amount of G-component data is twice that of the R or B component. In this embodiment, the G component can therefore be taken as the target color component, so that image edges are identified from the G-component image data.
In addition, the target color component can be adjusted to be an R component or a B component according to actual conditions.
Specifically, as shown in fig. 3, the step S2022 includes:
Step a1, acquiring component image data of the original image data on the target color component.
Optionally, component image data of the original image data on the G component is acquired.
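As a sketch, and assuming an RGGB Bayer layout (the actual sensor layout is device-specific and not stated in the patent), the G-component plane can be extracted from the RAW data like this:

```python
import numpy as np

def green_component(raw: np.ndarray) -> np.ndarray:
    """Average the two G sites of each 2x2 Bayer cell into one plane."""
    g_on_red_rows = raw[0::2, 1::2].astype(np.float32)   # G next to R
    g_on_blue_rows = raw[1::2, 0::2].astype(np.float32)  # G next to B
    return (g_on_red_rows + g_on_blue_rows) / 2.0
```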
Step a2, calculating a target direction operator of the plurality of detection directions based on pixel values of the component image data in the plurality of detection directions.
Further, the plurality of detection directions includes the 135° direction, the 45° direction, the horizontal direction, and the vertical direction; the target direction operators of the detection directions are calculated by the following formulas:
G135=|G(i+1,j-1)-G(i-1,j+1)|;
G45=|G(i+1,j+1)-G(i-1,j-1)|;
G0=|G(i-1,j-1)+G(i+1,j-1)-G(i-1,j+1)-G(i+1,j+1)|/2;
G90=|G(i-1,j-1)+G(i-1,j+1)-G(i+1,j-1)-G(i+1,j+1)|/2;
wherein G135 denotes the target direction operator in the 135° direction, G45 the operator in the 45° direction, G0 the operator in the horizontal direction, and G90 the operator in the vertical direction; G(i+1, j-1), G(i-1, j+1), G(i+1, j+1), and G(i-1, j-1) denote the pixel values of the component image data at row i+1, column j-1; row i-1, column j+1; row i+1, column j+1; and row i-1, column j-1, respectively.
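The four operators transcribe directly into code. A sketch for the pixel at row i, column j of the G-component plane follows; the plane must be cast to a signed type beforehand so the differences cannot wrap around, and the function name is hypothetical:

```python
import numpy as np

def direction_operators(g: np.ndarray, i: int, j: int):
    """g: G-component plane as a signed array, e.g. plane.astype(np.int32)."""
    g135 = abs(g[i + 1, j - 1] - g[i - 1, j + 1])
    g45 = abs(g[i + 1, j + 1] - g[i - 1, j - 1])
    g0 = abs(g[i - 1, j - 1] + g[i + 1, j - 1]
             - g[i - 1, j + 1] - g[i + 1, j + 1]) / 2
    g90 = abs(g[i - 1, j - 1] + g[i - 1, j + 1]
              - g[i + 1, j - 1] - g[i + 1, j + 1]) / 2
    return g135, g45, g0, g90
```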
In step S2023, edge detection is performed on the original image data based on the target direction operator, so as to obtain a first region of interest.
Illustratively, the first region of interest may be a pathological region such as an intestinal incision, a lesion, or a blood vessel.
It should be noted that whether an edge exists in the original image data can be determined from the target direction operators, and the first region of interest in the image to be processed can be determined based on the detected edges.
Specifically, as shown in fig. 4, the step S2023 includes:
Step b1, performing matrix division on pixels in the original image data to obtain a plurality of first pixel matrices.
For example, the pixels in the original image data may be divided into a plurality of 11×11 pixel matrices; that is, each first pixel matrix is an 11×11 pixel matrix.
Step b2, obtaining edge state information corresponding to the first pixel matrix based on the target direction operator corresponding to the first pixel matrix and a preset detection value corresponding to the target direction operator.
Specifically, step b2 includes: if any target direction operator of the first pixel matrix is greater than its corresponding preset detection value, determining that the edge state information corresponding to the first pixel matrix indicates that edge information exists in the first pixel matrix; if all target direction operators of the first pixel matrix are less than or equal to their corresponding preset detection values, determining that the edge state information indicates that no edge information exists in the first pixel matrix.
Illustratively, preset detection values are set according to the product requirements: Vth135 for the target direction operator in the 135° direction, Vth45 for the 45° direction, Vth0 for the horizontal direction, and Vth90 for the vertical direction.
When G0 > Vth0, G45 < Vth45, G90 < Vth90, and G135 < Vth135, it is determined that the first pixel matrix has edge information in the horizontal direction.
When G0 < Vth0, G45 > Vth45, G90 < Vth90, and G135 < Vth135, it is determined that the first pixel matrix has edge information in the 45° direction.
When G0 < Vth0, G45 < Vth45, G90 > Vth90, and G135 < Vth135, it is determined that the first pixel matrix has edge information in the vertical direction.
When G0 < Vth0, G45 < Vth45, G90 < Vth90, and G135 > Vth135, it is determined that the first pixel matrix has edge information in the 135° direction.
The four determination methods above assume that edge information exists in only one detection direction within the first pixel matrix. In practice, each first pixel matrix may contain edge information in several detection directions. Therefore, besides the four methods above, whether the current first pixel matrix has edge information in a given detection direction can be judged simply by whether the corresponding target direction operator exceeds its preset detection value. For example, when G0 > Vth0, the first pixel matrix is judged to have edge information in the horizontal direction; when G45 > Vth45, it is judged to have edge information in the 45° direction; and when both G0 > Vth0 and G45 > Vth45, it is judged to have edge information in both the horizontal and 45° directions.
Step b3, when the edge state information corresponding to any first pixel matrix is edge information, taking the region corresponding to that first pixel matrix in the original image data as the first region of interest.
As one alternative embodiment, step b3 includes: when the edge state information corresponding to the first pixel matrix indicates that edge information exists in any detection direction, taking the region corresponding to the first pixel matrix in the original image data as a first region of interest.
It can be understood that if the target direction operator in any detection direction is greater than the corresponding preset detection value, the current first pixel matrix can be judged to have edge information in that direction, so the corresponding region in the original image data can be used as the first region of interest.
As another alternative embodiment, step b3 includes: when the edge state information corresponding to the first pixel matrix indicates that edge information exists in a plurality of detection directions, taking the region corresponding to the first pixel matrix in the original image data as a first region of interest.
It will be appreciated that if the edge state information indicates that edges exist simultaneously in the horizontal, 45°, vertical, and 135° directions of the first pixel matrix, the first pixel matrix is determined to be part of the region of interest in the image to be processed.
When the edge state information corresponding to the first pixel matrix indicates that no edge information exists, the region corresponding to the first pixel matrix in the original image data is treated as one of the other regions.
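Putting steps b1 to b3 together, the following sketch tiles the plane into 11×11 blocks and flags a block as part of the first region of interest whenever any operator at any interior pixel exceeds its preset detection value. The threshold values below are placeholders rather than values from the patent, and `direction_operators` is the sketch given earlier:

```python
import numpy as np

BLOCK = 11
VTH = {"G0": 20, "G45": 20, "G90": 20, "G135": 20}  # assumed preset values

def first_roi_mask(plane: np.ndarray) -> np.ndarray:
    g = plane.astype(np.int32)  # signed, so the operator differences are valid
    h, w = g.shape
    mask = np.zeros((h, w), dtype=bool)
    for by in range(0, h - BLOCK + 1, BLOCK):
        for bx in range(0, w - BLOCK + 1, BLOCK):
            edge = False
            for i in range(by + 1, by + BLOCK - 1):
                for j in range(bx + 1, bx + BLOCK - 1):
                    g135, g45, g0, g90 = direction_operators(g, i, j)
                    if (g135 > VTH["G135"] or g45 > VTH["G45"]
                            or g0 > VTH["G0"] or g90 > VTH["G90"]):
                        edge = True
                        break
                if edge:
                    break
            mask[by:by + BLOCK, bx:bx + BLOCK] = edge  # whole block in or out
    return mask
```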
Step S2024, determining a target region of interest in the image to be processed based on the first region of interest.
It should be noted that, besides obtaining the target region of interest by performing edge detection on the image to be processed (the original image data) with the target direction operators, pseudo-color identification can be performed on the image to be processed, so that abnormal regions are identified by their pseudo-color to obtain the target region of interest.
As an alternative embodiment, as shown in fig. 5, the step S2024 includes:
Step c1, processing the original image data to obtain target image data in RGB format.
Optionally, the original image data is processed based on a demosaicing algorithm to obtain target image data in RGB format.
Step c2, calculating heme concentration values corresponding to all pixels in the target image data.
Specifically, the heme concentration value is calculated by the following formula:
IHB = C × log2(P / G);
where IHB is the heme concentration value, P is the red component or the blue component of the pixel in the target image data, G is the green component of the pixel in the target image data, and C is a constant.
Optionally, C may be 32, or another value adjusted to the actual situation.
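A sketch of the per-pixel computation on an RGB array, using the formula above with P taken as the red component and C = 32; the epsilon guarding against division by zero is an addition of this sketch:

```python
import numpy as np

def ihb_map(rgb: np.ndarray, c: float = 32.0) -> np.ndarray:
    """Per-pixel heme concentration: IHB = C * log2(P / G)."""
    eps = 1e-6
    p = rgb[..., 0].astype(np.float32) + eps  # red component as P
    g = rgb[..., 1].astype(np.float32) + eps  # green component
    return c * np.log2(p / g)
```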
Step c3, determining a preset pseudo-color display range.
Optionally, the pseudo-color display range may be either a wide dynamic range or a low dynamic range, chosen according to the application of the system. Illustratively, the wide dynamic range is 16.5-86.5 and the low dynamic range is 32.5-71.5.
Illustratively, when the wide dynamic range is selected as the preset pseudo-color display range, pixels with IHB > 16.5 are marked; when the low dynamic range is selected, pixels with IHB > 32.5 are marked. The pseudo-color display range can be adjusted to the actual endoscope type.
Step c4, performing pseudo-color identification on the original image data based on the heme concentration values and the preset pseudo-color display range to obtain a second region of interest.
Specifically, step c4 includes: if the heme concentration value is within the preset pseudo-color display range, marking the pixel corresponding to that heme concentration value; performing matrix division on pixels in the target image data to obtain a plurality of second pixel matrices; determining whether marked pixels exist in each second pixel matrix; and if marked pixels exist in any second pixel matrix, taking the region corresponding to that second pixel matrix in the target image data as the second region of interest.
For example, the pixels in the target image data may be divided into a plurality of 11×11 pixel matrices; that is, each second pixel matrix is an 11×11 pixel matrix. If marked pixels exist in a second pixel matrix, the corresponding region in the target image data is taken as the second region of interest, i.e., a key region; if no marked pixels exist, the corresponding region is taken as one of the other regions, i.e., a non-key region.
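A sketch of this marking-and-tiling step, assuming the wide dynamic range 16.5-86.5 from above as the preset display range and hypothetical function and parameter names:

```python
import numpy as np

def second_roi_mask(ihb: np.ndarray, lo: float = 16.5, hi: float = 86.5,
                    block: int = 11) -> np.ndarray:
    """Flag every block that contains at least one marked pixel."""
    marked = (ihb > lo) & (ihb < hi)  # pixels inside the assumed display range
    h, w = marked.shape
    mask = np.zeros((h, w), dtype=bool)
    for by in range(0, h, block):
        for bx in range(0, w, block):
            if marked[by:by + block, bx:bx + block].any():
                mask[by:by + block, bx:bx + block] = True
    return mask
```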
In actual operation, edge detection and pseudo-color identification can be performed on the image to be processed in parallel to obtain the first region of interest and the second region of interest. It can be appreciated that edge detection preserves fine display of physiological structures, while pseudo-color identification enables fine display of abnormal regions.
Step c5, fitting the first region of interest and the second region of interest to obtain a third region of interest.
It should be noted that the first region of interest and the second region of interest may be fitted by taking their union to obtain the third region of interest, or by keeping only the region they have in common; the fitting manner is not limited here.
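On boolean masks, both fitting strategies are one-liners; `first_mask` and `second_mask` are assumed to be the masks produced by the earlier sketches:

```python
# Union: keep everything either detector found (merging fit).
third_roi = first_mask | second_mask
# Intersection: keep only the region both detectors agree on (same-region fit).
third_roi_strict = first_mask & second_mask
```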
Step c6, taking the region corresponding to the third region of interest in the image to be processed as the target region of interest.
It is worth noting that, for an endoscope, the invention identifies regions of interest such as lesions and blood vessels during endoscopic diagnosis by means of edge detection and pseudo-color identification, and applies high-quality interpolation to those regions. This addresses the poor results currently obtained when a high-resolution image sensor in an endoscope is electronically magnified and adapted to high-resolution output, achieves the best possible display without replacing the processor, reduces production cost, and improves the user experience.
Illustratively, as shown in fig. 6, the endoscope 100 includes a scope end 110, a handpiece 120, a light source 130, an image processing device 140, a display device 150, and a trolley (not shown in the figure). The scope end 110 includes an image sensor and a light-sensing part for the light source. To clearly describe the execution of the image processing method of the present invention, a specific example follows. As shown in fig. 7, after acquiring the image to be processed, the image sensor in the scope end 110 transmits it to the image processing device 140, which executes the following image processing procedure:
1. Perform signal decoding on the image to be processed to obtain original image data in RAW format.
2. Perform edge detection on the original image data to obtain a first region of interest.
Illustratively, edge detection readily identifies the focal region of an intestinal incision. The image processing device 140 then fits the region of interest according to the edge state information, i.e., marks the pathological region. For a 1080p image, for example, a 1-bit 1920×1080 array is used, in which 0 denotes a non-pathological region (the other regions) and 1 denotes a pathological region (the first region of interest).
3. Perform RAW2RGB processing on the original image data, i.e., convert the RAW-format image into RGB-format target image data.
4. Perform pseudo-color identification (i.e., the IHB computation) on the target image data to obtain a second region of interest.
5. Fit the first region of interest and the second region of interest to obtain the target region of interest.
6. Interpolate the target region of interest with the complex algorithm to obtain a first processed image.
7. Interpolate the other regions with the simple algorithm to obtain a second processed image.
8. Fuse the first processed image and the second processed image to obtain an initial fused image.
9. Smooth the initial fused image to obtain the target display image.
10. Output the target display image to the display device 150, so that the display device 150 displays it.
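Tying the preceding sketches together, the following is an end-to-end sketch of steps 2 through 9. Decoding and RAW-to-RGB demosaicing are assumed to have happened already, and for simplicity the edge detector runs on the G channel of the demosaiced image rather than on the half-resolution RAW G plane:

```python
import cv2
import numpy as np

def process_frame(rgb: np.ndarray, factor: float) -> np.ndarray:
    """Scale an already-demosaiced RGB frame, reusing the helpers above."""
    g = rgb[..., 1]                                       # simplified G plane
    roi = first_roi_mask(g) | second_roi_mask(ihb_map(rgb))
    ys, xs = np.nonzero(roi)
    if xs.size == 0:                                      # no ROI: plain scale
        h, w = rgb.shape[:2]
        return cv2.resize(rgb, (int(w * factor), int(h * factor)),
                          interpolation=cv2.INTER_LINEAR)
    box = (int(xs.min()), int(ys.min()),
           int(xs.max() - xs.min() + 1), int(ys.max() - ys.min() + 1))
    first, second = interpolate_layers(rgb, box, factor)  # steps 6 and 7
    return fuse_layers(first, second, box, factor)        # steps 8 and 9
```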
This embodiment also provides an image processing apparatus, which is used to implement the foregoing embodiments and preferred implementations; what has already been described is not repeated. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the apparatus described in the following embodiments is preferably implemented in software, implementation in hardware, or in a combination of software and hardware, is also possible and contemplated.
The present embodiment provides an image processing apparatus, which is applied to an endoscope, as shown in fig. 8, including:
an image acquisition module 301, configured to acquire an image to be processed corresponding to a target object;
a region identification module 302, configured to identify a target region of interest in an image to be processed;
The first processing module 303 is configured to perform interpolation processing on the target region of interest based on a first interpolation algorithm, so as to obtain a first processed image;
The second processing module 304 is configured to perform interpolation processing on other areas in the image to be processed based on a second interpolation algorithm to obtain a second processed image, where the algorithm performance of the first interpolation algorithm is better than that of the second interpolation algorithm;
The image fusion module 305 is configured to fuse the first processed image and the second processed image to obtain a target display image.
In some alternative embodiments, the region identification module 302 includes:
the image decoding unit is used for decoding the image to be processed to obtain original image data;
the operator calculation unit is used for calculating a target direction operator of the original image data on the target color component;
the edge detection unit is used for performing edge detection on the original image data based on the target direction operator to obtain a first region of interest;
and the region determining unit is used for determining a target region of interest in the image to be processed based on the first region of interest.
In some alternative embodiments, the edge detection unit includes:
a pixel matrix dividing subunit, configured to perform matrix division on pixels in the original image data to obtain a plurality of first pixel matrices;
an edge information generating subunit, configured to obtain edge state information corresponding to the first pixel matrix based on the target direction operator corresponding to the first pixel matrix and a preset detection value corresponding to the target direction operator;
and a first region identifying subunit, configured to take the region corresponding to any first pixel matrix in the original image data as the first region of interest when the edge state information corresponding to that first pixel matrix is edge information.
In some alternative embodiments, the edge information generation subunit is specifically configured to:
if any target direction operator of the first pixel matrix is greater than its corresponding preset detection value, determining that the edge state information corresponding to the first pixel matrix indicates that edge information exists in the first pixel matrix;
if all target direction operators of the first pixel matrix are less than or equal to their corresponding preset detection values, determining that the edge state information corresponding to the first pixel matrix indicates that no edge information exists in the first pixel matrix.
In some alternative embodiments, the operator computing unit includes:
a component image acquisition subunit for acquiring component image data of the original image data on the target color component;
a direction operator calculating subunit for calculating a target direction operator for the plurality of detection directions based on pixel values of the component image data in the plurality of detection directions.
In some alternative embodiments, the plurality of detection directions includes the 135° direction, the 45° direction, the horizontal direction, and the vertical direction; the direction operator calculating subunit calculates the target direction operators of the plurality of detection directions by the following formulas:
G135=|G(i+1,j-1)-G(i-1,j+1)|;
G45=|G(i+1,j+1)-G(i-1,j-1)|;
G0=|G(i-1,j-1)+G(i+1,j-1)-G(i-1,j+1)-G(i+1,j+1)|/2;
G90=|G(i-1,j-1)+G(i-1,j+1)-G(i+1,j-1)-G(i+1,j+1)|/2;
wherein G135 denotes the target direction operator in the 135° direction, G45 the operator in the 45° direction, G0 the operator in the horizontal direction, and G90 the operator in the vertical direction; G(i+1, j-1), G(i-1, j+1), G(i+1, j+1), and G(i-1, j-1) denote the pixel values of the component image data at row i+1, column j-1; row i-1, column j+1; row i+1, column j+1; and row i-1, column j-1, respectively.
In some alternative embodiments, the region determining unit includes:
an image color conversion subunit, configured to process the original image data to obtain target image data in RGB format;
a heme calculation subunit, configured to calculate the heme concentration value corresponding to each pixel in the target image data;
a pseudo-color range determining subunit, configured to determine a preset pseudo-color display range;
a second region identification subunit, configured to perform pseudo-color identification on the original image data based on the heme concentration values and the preset pseudo-color display range to obtain a second region of interest;
a key region fitting subunit, configured to fit the first region of interest and the second region of interest to obtain a third region of interest;
and a key region determining subunit, configured to take the region corresponding to the third region of interest in the image to be processed as the target region of interest.
In some alternative embodiments, the second region identification subunit is specifically configured to:
if the heme concentration value is in the preset pseudo color display range, marking the pixel corresponding to the heme concentration value;
Performing matrix division on pixels in the target image data to obtain a plurality of second pixel matrixes;
determining whether marked pixels exist in each second pixel matrix;
if any second pixel matrix has marked pixels, the area corresponding to the second pixel matrix in the target image data is taken as a second interested area.
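A minimal sketch of this marking-and-blocking procedure (the block size of the second pixel matrixes and all names are assumptions):

```python
import numpy as np

def second_region_mask(ihb: np.ndarray, low: float, high: float,
                       block: int = 8) -> np.ndarray:
    """Mark pixels whose heme concentration value lies in the preset
    pseudo-color display range [low, high], divide the image into
    block x block second pixel matrixes, and flag every matrix that
    contains at least one marked pixel."""
    marked = (ihb >= low) & (ihb <= high)
    mask = np.zeros(marked.shape, dtype=bool)
    rows, cols = marked.shape
    for r in range(0, rows, block):
        for c in range(0, cols, block):
            if marked[r:r + block, c:c + block].any():
                mask[r:r + block, c:c + block] = True
    return mask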
In some alternative embodiments, the heme calculation subunit calculates the heme concentration value by the following formula:
Where IHB is a heme concentration value, P is a red component or a blue component of a pixel in the target image data, G is a green component of a pixel in the target image data, and C is a constant.
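The formula itself is not reproduced in this rendering; the variable definitions only relate P and G through the constant C. The sketch below assumes the conventional endoscopic hemoglobin-index form IHB = C · log2(P / G) with C = 32 — both the logarithmic form and the constant are assumptions:

```python
import numpy as np

def heme_concentration(p: np.ndarray, g: np.ndarray, c: float = 32.0) -> np.ndarray:
    """Per-pixel heme concentration value, assuming IHB = C * log2(P / G).

    p is the red (or blue) component of the target image data, g the
    green component; a small epsilon guards against division by zero."""
    eps = 1e-6
    return c * np.log2((p.astype(np.float64) + eps) /
                       (g.astype(np.float64) + eps))
```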
The image processing apparatus in this embodiment is presented in the form of functional modules, where a module refers to an application-specific integrated circuit (ASIC), a processor and memory that execute one or more software or firmware programs, and/or other devices that can provide the functions described above.
Further functional descriptions of the respective modules and units are the same as those of the corresponding embodiments above and are not repeated here.
The embodiment of the invention also provides a computer device equipped with the image processing apparatus shown in fig. 8.
Referring to fig. 9, fig. 9 is a block diagram of a computer device according to an alternative embodiment of the present invention. As shown in fig. 9, the computer device includes: one or more processors 10, a memory 20, and interfaces for connecting the various components, including high-speed interfaces and low-speed interfaces. The various components are communicatively coupled to each other using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions executed within the computer device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output device, such as a display device coupled to the interface. In some alternative embodiments, multiple processors and/or multiple buses may be used, if desired, along with multiple memories. Also, multiple computer devices may be connected, each providing a portion of the necessary operations (e.g., as a server array, a set of blade servers, or a multiprocessor system). One processor 10 is illustrated in fig. 9.
The processor 10 may be a central processor, a network processor, or a combination thereof. The processor 10 may further include a hardware chip, among others. The hardware chip may be an application specific integrated circuit, a programmable logic device, or a combination thereof. The programmable logic device may be a complex programmable logic device, a field programmable gate array, a general-purpose array logic, or any combination thereof.
The memory 20 stores instructions executable by the at least one processor 10 to cause the at least one processor 10 to perform the method of the embodiments described above.
The memory 20 may include a storage program area and a storage data area; the storage program area may store an operating system and at least one application program required for a function, and the storage data area may store data created according to the use of the computer device, and the like. In addition, the memory 20 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some alternative embodiments, the memory 20 may optionally include memory located remotely from the processor 10, which may be connected to the computer device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
Memory 20 may include volatile memory, such as random access memory; the memory may also include non-volatile memory, such as flash memory, hard disk, or solid state disk; the memory 20 may also comprise a combination of the above types of memories.
The computer device further comprises an input device 30 and an output device 40. The processor 10, the memory 20, the input device 30, and the output device 40 may be connected by a bus or other means; in fig. 9, a bus connection is taken as the example.
The input device 30 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the computer device, and may be, for example, a touch screen, a keypad, a mouse, a trackpad, a touchpad, a pointing stick, one or more mouse buttons, a trackball, or a joystick. The output device 40 may include a display device, auxiliary lighting means (e.g., LEDs), tactile feedback means (e.g., vibration motors), and the like. Such display devices include, but are not limited to, liquid crystal displays, light-emitting diode displays, and plasma displays. In some alternative implementations, the display device may be a touch screen.
The embodiments of the present invention also provide a computer-readable storage medium. The method according to the embodiments described above may be implemented in hardware or firmware, or realized as computer code that is recorded on a storage medium, or as computer code that is downloaded over a network from a remote storage medium or a non-transitory machine-readable storage medium and stored on a local storage medium, so that the method described herein can be executed by a general-purpose computer, a special-purpose processor, or programmable or dedicated hardware. The storage medium can be a magnetic disk, an optical disc, a read-only memory, a random access memory, a flash memory, a hard disk, a solid-state disk, or the like; further, the storage medium may also comprise a combination of the above kinds of memories. It will be appreciated that the computer, processor, microprocessor controller, or programmable hardware includes a storage element that can store or receive software or computer code that, when accessed and executed by the computer, processor, or hardware, implements the methods illustrated by the above embodiments.
Although embodiments of the present invention have been described in connection with the accompanying drawings, various modifications and variations may be made by those skilled in the art without departing from the spirit and scope of the invention, and such modifications and variations fall within the scope of the invention as defined by the appended claims.
Claims (12)
1. An image processing method, the method comprising:
acquiring an image to be processed corresponding to a target object;
Identifying a target region of interest in the image to be processed;
performing interpolation processing on the target region of interest based on a first interpolation algorithm to obtain a first processed image;
Performing interpolation processing on other areas in the image to be processed based on a second interpolation algorithm to obtain a second processed image, wherein the algorithm performance of the first interpolation algorithm is superior to that of the second interpolation algorithm;
And fusing the first processing image and the second processing image to obtain a target display image.
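For illustration only, a minimal sketch of this flow, assuming OpenCV's bicubic interpolation as the (higher-performance) first interpolation algorithm, bilinear as the second, and a mask-based fusion — the claim fixes none of these choices:

```python
import cv2
import numpy as np

def target_display_image(image: np.ndarray, roi_mask: np.ndarray,
                         scale: float) -> np.ndarray:
    """Interpolate the target region of interest with a stronger algorithm,
    the other areas with a cheaper one, then fuse the two processed images."""
    size = (int(image.shape[1] * scale), int(image.shape[0] * scale))
    first = cv2.resize(image, size, interpolation=cv2.INTER_CUBIC)    # ROI path
    second = cv2.resize(image, size, interpolation=cv2.INTER_LINEAR)  # other areas
    mask = cv2.resize(roi_mask.astype(np.uint8), size,
                      interpolation=cv2.INTER_NEAREST).astype(bool)
    fused = second.copy()
    fused[mask] = first[mask]  # keep the high-quality result inside the ROI
    return fused
```

A production implementation would interpolate only the pixels of each region rather than resizing the whole frame twice; the double resize merely keeps the sketch short.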
2. The method of claim 1, wherein the identifying a target region of interest in the image to be processed comprises:
Decoding the image to be processed to obtain original image data;
calculating a target direction operator of the original image data on a target color component;
Performing edge detection on the original image data based on the target direction operator to obtain a first region of interest;
and determining a target region of interest in the image to be processed based on the first region of interest.
3. The method according to claim 2, wherein the performing edge detection on the original image data based on the target direction operator to obtain a first region of interest comprises:
Performing matrix division on pixels in the original image data to obtain a plurality of first pixel matrixes;
Obtaining edge state information corresponding to the first pixel matrix based on a target direction operator corresponding to the first pixel matrix and a preset detection value corresponding to the target direction operator;
and when the edge state information corresponding to any one of the first pixel matrixes indicates that edge information exists, taking the area corresponding to that first pixel matrix in the original image data as the first region of interest.
4. The method of claim 3, wherein the obtaining the edge state information corresponding to the first pixel matrix based on the target direction operator corresponding to the first pixel matrix and the preset detection value corresponding to the target direction operator includes:
If any one of the target direction operators of the first pixel matrix is larger than the corresponding preset detection value, determining that the edge state information corresponding to the first pixel matrix is that edge information exists in the first pixel matrix;
if all the target direction operators of the first pixel matrix are smaller than or equal to the corresponding preset detection values, determining that the edge state information corresponding to the first pixel matrix is that no edge information exists in the first pixel matrix.
5. The method of claim 2, wherein said calculating a target direction operator for the raw image data on a target color component comprises:
acquiring component image data of the original image data on the target color component;
A target direction operator for a plurality of detection directions is calculated based on pixel values of the component image data in the plurality of detection directions.
6. The method of claim 5, wherein the plurality of detection directions comprises the 135° direction, the 45° direction, the horizontal direction, and the vertical direction; the target direction operators of the detection directions are calculated by the following formulas:
G135=|G(i+1,j-1)-G(i-1,j+1)|;
G45=|G(i+1,j+1)-G(i-1,j-1)|;
G0=|G(i-1,j-1)+G(i+1,j-1)-G(i-1,j+1)-G(i+1,j+1)|/2;
G90=|G(i-1,j-1)+G(i-1,j+1)-G(i+1,j-1)-G(i+1,j+1)|/2;
Wherein G135 represents the target direction operator in the 135° direction, G45 represents the target direction operator in the 45° direction, G0 represents the target direction operator in the horizontal direction, and G90 represents the target direction operator in the vertical direction; G(i+1, j-1) represents the pixel value of the component image data in the (i+1)-th row and the (j-1)-th column, G(i-1, j+1) represents the pixel value in the (i-1)-th row and the (j+1)-th column, G(i+1, j+1) represents the pixel value in the (i+1)-th row and the (j+1)-th column, and G(i-1, j-1) represents the pixel value in the (i-1)-th row and the (j-1)-th column.
7. The method according to any one of claims 2 to 6, wherein the determining a target region of interest in the image to be processed based on the first region of interest comprises:
Processing the original image data to obtain target image data in RGB format;
calculating heme concentration values corresponding to all pixels in the target image data;
Determining a preset pseudo color display range;
Performing pseudo color identification on the original image data based on the heme concentration value and the preset pseudo color display range to obtain a second region of interest;
fitting the first region of interest and the second region of interest to obtain a third region of interest;
And taking a region corresponding to the third region of interest in the image to be processed as the target region of interest.
8. The method of claim 7, wherein the performing pseudo-color identification on the original image data based on the heme concentration value and the preset pseudo-color display range to obtain a second region of interest comprises:
if the heme concentration value is in the preset pseudo color display range, marking the pixel corresponding to the heme concentration value;
performing matrix division on pixels in the target image data to obtain a plurality of second pixel matrixes;
determining whether marked pixels exist in each second pixel matrix;
and if a marked pixel exists in any second pixel matrix, taking the region corresponding to that second pixel matrix in the target image data as the second region of interest.
9. The method of claim 7, wherein the heme concentration value is calculated by the formula:
wherein IHB is the heme concentration value, P is the red component or the blue component of the pixel in the target image data, G is the green component of the pixel in the target image data, and C is a constant.
10. An image processing apparatus, characterized in that the apparatus comprises:
the image acquisition module is used for acquiring an image to be processed corresponding to the target object;
the region identification module is used for identifying a target region of interest in the image to be processed;
the first processing module is used for carrying out interpolation processing on the target region of interest based on a first interpolation algorithm to obtain a first processed image;
The second processing module is used for carrying out interpolation processing on other areas in the image to be processed based on a second interpolation algorithm to obtain a second processed image, and the algorithm performance of the first interpolation algorithm is superior to that of the second interpolation algorithm;
And the image fusion module is used for fusing the first processing image and the second processing image to obtain a target display image.
11. A computer device, comprising:
A memory and a processor in communication with each other, the memory having stored therein computer instructions, the processor executing the computer instructions to perform the image processing method of any of claims 1 to 9.
12. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon computer instructions for causing a computer to execute the image processing method according to any one of claims 1 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410160426.7A CN118037542A (en) | 2024-02-04 | 2024-02-04 | Image processing method, device, equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN118037542A (en) | 2024-05-14
Family
ID=91001718
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410160426.7A (Pending) | Image processing method, device, equipment and storage medium | 2024-02-04 | 2024-02-04
Country Status (1)
Country | Link |
---|---|
CN (1) | CN118037542A (en) |
Similar Documents
Publication | Title
---|---
US11170482B2 | Image processing method and device
CN110533609B | Image enhancement method, device and storage medium suitable for endoscope
WO2021104056A1 | Automatic tumor segmentation system and method, and electronic device
Wang et al. | Smartphone-based wound assessment system for patients with diabetes
CN111192356B | Method, device, equipment and storage medium for displaying region of interest
US11467661B2 | Gaze-point determining method, contrast adjusting method, and contrast adjusting apparatus, virtual reality device and storage medium
WO2018126686A1 | Processing circuit and display method for display screen, and display device
JP5826081B2 | Image processing apparatus, character recognition method, and computer program
US11087465B2 | Medical image processing apparatus, medical image processing method, and medical image processing program
US10748279B2 | Image processing apparatus, image processing method, and computer readable recording medium
CN114298900A | Image super-resolution method and electronic equipment
WO2018090450A1 | Uniformity measurement method and system for display screen
CN108024103A | Image sharpening method and device
Li et al. | Underwater Imaging Formation Model‐Embedded Multiscale Deep Neural Network for Underwater Image Enhancement
JP2016144049A | Image processing apparatus, image processing method, and program
CN113808054A | Method for repairing optic disc region of fundus image and related product
RU2012131148A | Image data processing
CN118037542A | Image processing method, device, equipment and storage medium
JP7265805B2 | Image analysis method, image analysis device, image analysis system, control program, recording medium
CN112734701A | Fundus focus detection method, fundus focus detection device and terminal equipment
JP2009112507A | Method and apparatus for information control, and endoscope system
KR20210026176A | Generating method of labeling image for deep learning
US12002215B2 | Method and apparatus for compensating image segmentation lines
CN115239651A | Focus calibration method and device
JP2006059215A | Rotation angle detector for object, facial rotation angle detection program, and facial rotation angle detection method
Legal Events
Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination