CN111861915A - Method and device for eliminating defocusing diffusion effect in microscopic imaging scene - Google Patents

Method and device for eliminating defocusing diffusion effect in microscopic imaging scene

Info

Publication number
CN111861915A
CN111861915A (application CN202010654030.XA; granted as CN111861915B)
Authority
CN
China
Prior art keywords
image
matrix
fused
background
diffusion effect
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010654030.XA
Other languages
Chinese (zh)
Other versions
CN111861915B (en)
Inventor
班晓娟
高鸣飞
马博渊
黄海友
印象
Current Assignee
University of Science and Technology Beijing USTB
Original Assignee
University of Science and Technology Beijing USTB
Priority date
Filing date
Publication date
Application filed by University of Science and Technology Beijing USTB filed Critical University of Science and Technology Beijing USTB
Priority to CN202010654030.XA
Publication of CN111861915A
Application granted
Publication of CN111861915B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/73: Deblurring; Sharpening
    • G06T 5/20: Image enhancement or restoration using local operators
    • G06T 5/30: Erosion or dilatation, e.g. thinning
    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

The invention provides a method and a device for eliminating the defocus diffusion effect in a microscopic imaging scene, belonging to the fields of image processing and artificial intelligence. The method comprises: generating a decision matrix from a set of images to be fused; establishing a background matrix of the same size as the images to be fused, locating low-valued background pixels in the image set according to the image set and the decision matrix, and recording them in the background matrix; and generating a preliminary fusion result map from the decision matrix, then overlaying the values recorded in the background matrix onto the preliminary result to obtain a fusion result map in which the defocus diffusion effect is eliminated. Without changing the original focusing interval or brightness, the method and device solve the problem of background artifacts in the fusion result caused by the foreground's defocus diffusion effect in multi-focus image fusion tasks under microscopic imaging, thereby improving imaging accuracy.

Description

Method and device for eliminating defocusing diffusion effect in microscopic imaging scene
Technical Field
The invention relates to the field of image processing and artificial intelligence, in particular to a method and a device for eliminating defocusing diffusion effect in a microscopic imaging scene.
Background
Because of the limited depth of field of an optical lens, it is difficult to bring all objects spanning a wide range of depths into focus in a single shot. A multi-focus image fusion method based on image processing is therefore usually adopted: several images of the same scene, each focused on a different target, are fused so that their respective sharp regions combine into a single all-in-focus image. This is commonly applied in image processing and analysis tasks under microscopic imaging: because the surface of the observed object is uneven, multiple images must be captured at different microscope focal settings to obtain in-focus views of different targets, and a multi-focus fusion method then merges the sharp regions of these images into one clear result, allowing researchers to observe microstructures distinctly.
In practical applications, if the observed region contains protrusions or depressions whose height or depth difference is too large, those positions exceed the depth of field of the imaging device and cannot be brought into focus anywhere within the focusing range, so out-of-focus regions are unavoidable. Taking a deep depression as an example, the defocus diffusion effect produced by the foreground in its out-of-focus state alters the pixel-value distribution of the recessed background region, ultimately producing distorted artifacts at that region in the fusion result.
In view of the above problems, solutions based on hardware or software or a combination of both are usually adopted in the prior art. The hardware-based optimization method mainly comprises the following two methods:
First, enlarging the focusing interval and capturing more images at different focal settings, in the hope of finding an in-focus view of the deep recessed background and thereby forming a clear fusion result. This prolongs shooting time, increases power consumption and hardware wear, and ultimately lowers imaging efficiency and may even shorten the service life of the precision lens.
Second, suppressing the illumination intensity during imaging to weaken the defocus diffusion from objects outside the recessed area. While this reduces the defocus diffusion effect, it lowers the overall brightness and ultimately degrades overall imaging quality.
In summary, the hardware-based solution has the problems of high hardware modification cost and high technology upgrading difficulty besides the above defects.
The main software solution is the Guided Filter method, which smooths the transition between focused and unfocused regions but cannot fundamentally eliminate the defocus diffusion effect. Moreover, guided filtering is limited to fusing two images and is difficult to apply effectively in scenarios where many images must be fused.
Therefore, an image processing method and an image processing device are urgently needed, so that the influence of the defocusing diffusion effect of the foreground on the image is suppressed on the basis of not changing the original focusing and shooting interval and brightness, and the imaging quality of the fusion result is improved.
Disclosure of Invention
The embodiment of the invention provides a method and a device for eliminating the defocusing diffusion effect in a microscopic imaging scene, which can solve the problem that the background of a fusion result generates artifacts caused by the defocusing diffusion effect of a foreground in a multi-focus image fusion task in the microscopic imaging scene under the condition of not changing the original focusing shooting interval and brightness, thereby improving the imaging accuracy. The technical scheme is as follows:
in one aspect, a method for eliminating out-of-focus diffusion effect in a microscopic imaging scene is provided, and the method is applied to electronic equipment, and comprises the following steps:
generating a decision matrix according to the image set to be fused, wherein each value in the decision matrix records the number of the image to be fused with the clearest image at the corresponding position;
establishing a background matrix with the same size as the image to be fused, obtaining background pixels with low pixel values in the image set to be fused according to the image set to be fused and the decision matrix, and recording the background pixels into the background matrix;
And generating an image preliminary fusion result graph according to the decision matrix, and covering the result recorded in the background matrix into the image preliminary fusion result to obtain the image fusion result graph after the defocusing diffusion effect is eliminated.
Further, the generating a decision matrix according to the image set to be fused includes:
and processing the image set to be fused by using a multi-focus image fusion method to generate a decision matrix.
Further, the image set to be fused refers to a set of two or more images of the same scene, identical in size but differently focused.
Further, the decision matrix refers to a matrix which has the same size as the size of the image to be fused and has a value range of [1, N ];
and N is the number of the images to be fused in the image set to be fused.
Further, the establishing of the background matrix with the same size as the image to be fused, obtaining the background pixels with low pixel values in the image set to be fused according to the image set to be fused and the decision matrix, and recording the background pixels into the background matrix comprises:
s21, establishing a background matrix with the same size as the image to be fused, wherein the initial value of the background matrix is a first threshold value, and initializing the iteration number i to be 1;
s22, establishing a binary candidate matrix with the same size as the image to be fused, wherein the position with the median value i of the decision matrix is 1 in the median value of the binary candidate matrix, and the rest values of the binary candidate matrix are 0;
S23, performing expansion processing on the binary candidate matrix by adopting a morphological transformation method to expand the area with the median value of 1 in the binary candidate matrix;
s24, multiplying the expanded binary candidate matrix by the image to be fused with the serial number i, selecting the pixel value meeting the first condition in the product result and recording the pixel value into the background matrix, wherein the first condition is as follows:
(0 < pixel value < T) ∩ (pixel value < value currently recorded at this position in the background matrix)
wherein T is a second threshold; 0 < pixel value < T indicates that a position whose pixel value is below the second threshold is a background pixel;
and S25, increasing the iteration count i by 1 and returning to S22 to continue the iteration; the iteration stops when i equals N + 1.
Further, the generating an image preliminary fusion result graph according to the decision matrix, and overlaying the result recorded in the background matrix to the image preliminary fusion result to obtain the image fusion result graph after the defocus diffusion effect is eliminated includes:
according to the value of each position in the decision matrix, obtaining the pixel value of the position in the image to be fused corresponding to the value, and splicing to form an image primary fusion result graph;
and covering the value which is not the first threshold value in the background matrix into the image preliminary fusion result to obtain an image fusion result image after the defocusing diffusion effect is eliminated.
In one aspect, a device for eliminating defocus diffusion effect in a microscopic imaging scene is provided, and the device is applied to electronic equipment, and comprises:
the generating unit is used for generating a decision matrix according to the image set to be fused, wherein each value in the decision matrix records the number of the image to be fused which has the clearest imaging at the corresponding position;
the recording unit is used for establishing a background matrix with the same size as the image to be fused, obtaining background pixels with low pixel values in the image set to be fused according to the image set to be fused and the decision matrix and recording the background pixels into the background matrix;
and the fusion unit is used for generating an image preliminary fusion result graph according to the decision matrix, and covering the result recorded in the background matrix into the image preliminary fusion result to obtain the image fusion result graph after the defocus diffusion effect is eliminated.
In one aspect, an electronic device is provided, where the electronic device includes a processor and a memory, where the memory stores at least one instruction, and the at least one instruction is loaded and executed by the processor to implement the method for removing out-of-focus diffusion effect in a microscopic imaging scene.
In one aspect, a computer-readable storage medium is provided, where at least one instruction is stored in the storage medium, and the at least one instruction is loaded and executed by a processor to implement the method for removing the out-of-focus diffusion effect in the microscopic imaging scene.
The technical scheme provided by the embodiment of the invention has the beneficial effects that at least:
in the embodiment of the invention, a decision matrix is generated according to an image set to be fused, wherein each value in the decision matrix records the number of the image to be fused with the clearest image at the corresponding position; establishing a background matrix with the same size as the image to be fused, obtaining background pixels with low pixel values in the image set to be fused according to the image set to be fused and the decision matrix, and recording the background pixels into the background matrix; generating an image preliminary fusion result graph according to the decision matrix, and covering the result recorded in the background matrix into the image preliminary fusion result to obtain the image fusion result graph after the defocusing diffusion effect is eliminated; therefore, the problem that artifacts are generated at the background in a fusion result due to the defocusing diffusion effect of the foreground in the multi-focus image fusion task in a microscopic imaging scene can be solved without changing the original focusing shooting interval and brightness, and the imaging accuracy is improved.
Drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings required for describing the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic flow chart of a method for eliminating an out-of-focus diffusion effect in a microscopic imaging scene according to an embodiment of the present invention;
fig. 2 is a schematic diagram of images to be fused of different focuses shot by focusing in a microscopic imaging scene according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of the defocus diffusion effect provided by the embodiment of the present invention and the effect of eliminating the defocus diffusion effect by using the present invention;
fig. 4 is a schematic structural diagram of an out-of-focus diffusion effect eliminating device in a microscopic imaging scene according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
As shown in fig. 1, an embodiment of the present invention provides a method for eliminating an out-of-focus diffusion effect in a microscopic imaging scene, where the method may be implemented by an electronic device, and the electronic device may be a terminal or a server, and the method includes:
s1, generating a decision matrix according to the image set to be fused, wherein each value in the decision matrix records the number of the image to be fused which has the clearest imaging at the corresponding position;
S2, establishing a background matrix with the same size as the image to be fused, obtaining background pixels with low pixel values in the image set to be fused according to the image set to be fused and the decision matrix, and recording the background pixels into the background matrix;
and S3, generating an image preliminary fusion result picture according to the decision matrix, and covering the result recorded in the background matrix into the image preliminary fusion result to obtain the image fusion result picture after the defocus diffusion effect is eliminated.
According to the method for eliminating the defocusing diffusion effect in the microscopic imaging scene, a decision matrix is generated according to an image set to be fused, wherein each value in the decision matrix records the number of the image to be fused with the clearest imaging at the corresponding position; establishing a background matrix with the same size as the image to be fused, obtaining background pixels with low pixel values in the image set to be fused according to the image set to be fused and the decision matrix, and recording the background pixels into the background matrix; generating an image preliminary fusion result graph according to the decision matrix, and covering the result recorded in the background matrix into the image preliminary fusion result to obtain the image fusion result graph after the defocusing diffusion effect is eliminated; therefore, the problem that artifacts are generated at the background in a fusion result due to the defocusing diffusion effect of the foreground in the multi-focus image fusion task in a microscopic imaging scene can be solved without changing the original focusing shooting interval and brightness, and the imaging accuracy is improved.
Further, the generating a decision matrix (S1) according to the image set to be fused includes:
and processing the image set to be fused by using a multi-focus image fusion method to generate a decision matrix.
In this embodiment, the multi-focus image fusion method includes: one or more image fusion methods such as a convolution neural network fusion method and a gradient-based fusion method.
In this embodiment, the image set to be fused is a set of two or more images of the same scene, identical in size but focused differently. As shown in fig. 2, target areas at different heights come into focus as the focal length changes; the rightmost column of fig. 2 shows the two pins in focus, while the other images show the pins out of focus. When a pin is out of focus, its pixel values diffuse into the surrounding area, creating the false appearance of an enlarged pin and altering the brightness distribution of its surroundings; this is called the defocus diffusion effect. During fusion it easily misleads a multi-focus image fusion method into treating the diffused result as an in-focus region, producing distortion artifacts in the fusion result, as in the area indicated by the arrow in the enlarged detail in the first row of fig. 3.
In this embodiment, the decision matrix refers to a matrix having the same size as the image to be fused and a value range of [1, N ]; and N is the number of the images to be fused in the image set to be fused.
In this embodiment, the sizes of the matrix and of the images to be fused both refer to pixel dimensions: if an image to be fused is H × W pixels, where H is the number of rows and W the number of columns, then the matrix is also H × W, and each element of the decision matrix corresponds to one pixel of the image.
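The text leaves the choice of sharpness criterion to any multi-focus fusion method. As one illustrative stand-in, not the method prescribed by the patent, a decision matrix can be built from a windowed gradient-energy focus measure; the names `box_sum` and `decision_matrix` and the 9-pixel window are assumptions of this sketch:

```python
import numpy as np

def box_sum(a, r):
    """Sum of each (2r+1) x (2r+1) neighbourhood, edges replicated,
    computed with a cumulative-sum (integral image) trick."""
    p = np.pad(a, r + 1, mode="edge")
    c = p.cumsum(0).cumsum(1)
    h, w = a.shape
    k = 2 * r + 1
    return (c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k])[:h, :w]

def decision_matrix(images, win=9):
    """For each pixel, the 1-based index of the source image that is
    sharpest there, scored by gradient energy summed over a win x win
    window. `images` is a list of N equal-sized grayscale arrays."""
    scores = []
    for img in images:
        gy, gx = np.gradient(img.astype(np.float64))
        scores.append(box_sum(gx ** 2 + gy ** 2, win // 2))
    return np.argmax(np.stack(scores), axis=0) + 1  # values in [1, N]
```

The returned array matches the description above: same H × W size as the inputs, values in [1, N].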
Further, the establishing a background matrix with the same size as the image to be fused, obtaining a background pixel with a low pixel value in the image set to be fused according to the image set to be fused and the decision matrix, and recording the background pixel into the background matrix (S2) includes:
s21, establishing a background matrix with the same size as the image to be fused, wherein the initial value of the background matrix is a first threshold value, and initializing the iteration number i to be 1;
in this embodiment, the first threshold is much larger than the maximum pixel value of the image, for example, infinity.
In this embodiment, the number of iterations may be recorded by an iterator.
S22, establishing a binary candidate matrix with the same size as the image to be fused, wherein the position with the median value i of the decision matrix is 1 in the median value of the binary candidate matrix, and the rest values of the binary candidate matrix are 0;
S23, performing expansion processing on the binary candidate matrix by adopting a morphological transformation method to expand the area with the median value of 1 in the binary candidate matrix;
in this embodiment, the morphological dilation operation may select a square structural element with a side length of 50, and in practical applications, the size and shape of the structural element should be selected according to the size of the image to be fused. It is considered that those skilled in the art can modify the shape and size of the structural elements according to their own images without departing from the scope of protection of the present invention.
In this embodiment, the surrounding area of the focus area should also be the focus area (i.e. the candidate area obtained after the dilation operation), and the search for the background pixel with a lower pixel value in the candidate area can improve the search accuracy.
S24, multiplying the expanded binary candidate matrix by the image to be fused with the serial number i, selecting the pixel value meeting the first condition in the product result and recording the pixel value into the background matrix, wherein the first condition is as follows:
(0 < pixel value < T) ∩ (pixel value < value currently recorded at this position in the background matrix)
Wherein T is a second threshold; 0 < pixel value < T indicates that a position whose pixel value is below the second threshold is a background pixel;
In an embodiment, T takes a value of 30, for example. If the image to be fused is a grayscale image, its pixel value is compared directly with the second threshold T; if it is a color image, the maximum of the three RGB channels is taken as the pixel value and compared with T. Those skilled in the art may set the second threshold T according to their own images without departing from the scope of protection of the present invention.
And S25, increasing the iteration count i by 1 and returning to S22 to continue the iteration; the iteration stops when i equals N + 1.
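A minimal sketch of steps S21 to S25 follows, under two stated assumptions: the 50-pixel square structuring element of the embodiment is parameterized as a radius (`se_radius`), and the condition joining the two comparisons is read as a conjunction, since with the background matrix initialized to infinity a disjunction would record every masked pixel on the first pass. All function names are choices of this sketch:

```python
import numpy as np

def box_dilate(b, r):
    """Binary dilation with a (2r+1) x (2r+1) square structuring element."""
    p = np.pad(b, r, mode="constant")
    h, w = b.shape
    out = np.zeros_like(b)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out = np.maximum(out, p[dy:dy + h, dx:dx + w])
    return out

def suppress_defocus_background(images, dm, T=30.0, se_radius=25):
    """S21-S25: collect dark background pixels near each in-focus region.
    `images` is a list of N grayscale arrays; `dm` is the decision matrix
    with values 1..N; T is the second threshold from the embodiment."""
    bg = np.full(dm.shape, np.inf)            # S21: first threshold = infinity
    for i in range(1, len(images) + 1):
        cand = (dm == i).astype(np.float64)   # S22: binary candidate matrix
        cand = box_dilate(cand, se_radius)    # S23: expand the 1-valued area
        prod = cand * images[i - 1]           # S24: mask the i-th image
        rec = (prod > 0) & (prod < T) & (prod < bg)
        bg[rec] = prod[rec]                   # keep the darker recorded value
    return bg                                 # S25: loop ends after i = N
```

Positions never recorded keep the first-threshold value (infinity), which the later overlay step uses to tell recorded from unrecorded pixels.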
Further, the generating an image preliminary fusion result graph according to the decision matrix, and overlaying the result recorded in the background matrix to the image preliminary fusion result to obtain the image fusion result graph (S3) after the defocus diffusion effect is eliminated includes:
s31, according to the value of each position in the decision matrix, obtaining the pixel value of the position in the image to be fused corresponding to the value, and splicing to form an image preliminary fusion result graph;
and S32, covering the value which is not the first threshold value in the background matrix into the image preliminary fusion result to obtain an image fusion result graph after the defocus diffusion effect is eliminated.
In this embodiment, every value in the background matrix that is not infinite is overlaid onto the corresponding position of the preliminary image fusion result, yielding the optimized fusion result shown in the second row of fig. 3; the enlarged details in that row show that, compared with the first-row fusion result, the defocus diffusion effect is clearly eliminated.
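Steps S31 and S32 can be sketched as follows; the name `fuse_and_overlay` and the use of NumPy's `take_along_axis` for the per-pixel selection are choices of this illustration, not specified in the text:

```python
import numpy as np

def fuse_and_overlay(images, dm, bg):
    """S31: stitch the preliminary fusion by picking, at each pixel, the
    value from the image named by the decision matrix. S32: overwrite
    every position whose background-matrix entry is finite (i.e. not the
    first threshold) with that recorded background value."""
    stack = np.stack([img.astype(np.float64) for img in images])  # (N, H, W)
    idx = (np.asarray(dm) - 1).astype(int)[None]                  # 0-based
    fused = np.take_along_axis(stack, idx, axis=0)[0]             # S31
    mask = np.isfinite(bg)            # positions actually recorded in S24
    fused[mask] = bg[mask]            # S32: overlay background pixels
    return fused
```

Only finite entries of the background matrix are copied, so regions where no dark background pixel was found keep the preliminary fusion values.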
In this embodiment, because a deep background recess in a microscopic imaging scene lies far from the lens and reflects little light, the pixels at such positions usually have a value of 0. Moreover, the defocus diffusion effect mainly affects the area surrounding a focused region, and the target area adjacent to a deep recess produces no defocus diffusion while it is in focus. The morphological dilation therefore takes the surroundings of each in-focus region as a candidate area, and searching that candidate area for low-valued pixels quickly and robustly locates the true deep recessed background region. Overlaying the background matrix onto the preliminary fusion result then suppresses the influence that the adjacent target area's defocus diffusion exerts on the recess when that area is out of focus. In this way, the problem of background artifacts in the fusion result of the multi-focus image fusion task under a microscopic imaging scene is solved without changing the original focusing interval or brightness, and imaging accuracy is improved.
The present invention further provides a specific embodiment of a device for eliminating the defocus diffusion effect in a microscopic imaging scene, corresponding to the method embodiment above. Since the device achieves the object of the invention by executing the process steps of the method embodiment, the explanations given for the method embodiment also apply to the device embodiment and are not repeated here.
As shown in fig. 4, an embodiment of the present invention further provides a device for eliminating out-of-focus diffusion effect in a microscopic imaging scene, including:
the generating unit 11 is configured to generate a decision matrix according to the image set to be fused, where each value in the decision matrix records a number of an image to be fused having the clearest image at a corresponding position;
the recording unit 12 is configured to establish a background matrix with the same size as the image to be fused, obtain background pixels with low pixel values in the image set to be fused according to the image set to be fused and the decision matrix, and record the background pixels into the background matrix;
and the fusion unit 13 is configured to generate an image preliminary fusion result map according to the decision matrix, and overlay the result recorded in the background matrix into the image preliminary fusion result to obtain the image fusion result map from which the defocus diffusion effect is eliminated.
The device for eliminating the defocusing diffusion effect in the microscopic imaging scene generates a decision matrix according to an image set to be fused, wherein each value in the decision matrix records the number of the image to be fused with the clearest imaging at the corresponding position; establishing a background matrix with the same size as the image to be fused, obtaining background pixels with low pixel values in the image set to be fused according to the image set to be fused and the decision matrix, and recording the background pixels into the background matrix; generating an image preliminary fusion result graph according to the decision matrix, and covering the result recorded in the background matrix into the image preliminary fusion result to obtain the image fusion result graph after the defocusing diffusion effect is eliminated; therefore, the problem that artifacts are generated at the background in a fusion result due to the defocusing diffusion effect of the foreground in the multi-focus image fusion task in a microscopic imaging scene can be solved without changing the original focusing shooting interval and brightness, and the imaging accuracy is improved.
Fig. 5 is a schematic structural diagram of an electronic device 600 according to an embodiment of the present invention. The electronic device 600 may vary considerably in configuration and performance, and may include one or more processors (CPUs) 601 and one or more memories 602, where the memory 602 stores at least one instruction that is loaded and executed by the processor 601 to implement the method for eliminating the defocus diffusion effect in a microscopic imaging scene.
In an exemplary embodiment, a computer-readable storage medium, such as a memory including instructions executable by a processor in a terminal, is also provided to perform the above method for eliminating the defocus diffusion effect in a microscopic imaging scene. For example, the computer-readable storage medium may be a ROM, a random-access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (7)

1. A method for eliminating the defocusing diffusion effect in a microscopic imaging scene is characterized by comprising the following steps:
generating a decision matrix according to the image set to be fused, wherein each value in the decision matrix records the index of the image to be fused that is most sharply focused at the corresponding position;
establishing a background matrix of the same size as the images to be fused, obtaining the low-valued background pixels of the image set to be fused according to the image set and the decision matrix, and recording them in the background matrix;
and generating an image preliminary fusion result graph according to the decision matrix, and overwriting the preliminary fusion result with the values recorded in the background matrix to obtain the image fusion result graph with the defocus diffusion effect eliminated.
2. The method for eliminating the defocus diffusion effect in a microscopic imaging scene according to claim 1, wherein generating the decision matrix according to the image set to be fused comprises:
processing the image set to be fused with a multi-focus image fusion method to generate the decision matrix.
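Claim 2 leaves the choice of multi-focus image fusion method open. As one hedged illustration, a decision matrix can be derived from a per-pixel focus measure, here the locally averaged squared Laplacian, a standard sharpness proxy. The window size is illustrative, and this sketch returns 0-based indices, whereas claim 4 specifies the range [1, N].

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def decision_matrix(images, window=9):
    """Return, per pixel, the 0-based index of the sharpest input image."""
    focus = []
    for img in images:
        lap = laplace(img.astype(np.float64))      # second-derivative response
        focus.append(uniform_filter(lap ** 2, size=window))  # local focus energy
    return np.argmax(np.stack(focus), axis=0)      # (H, W), values in 0..N-1
```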
3. The method for eliminating the defocus diffusion effect in a microscopic imaging scene according to claim 1 or 2, wherein the image set to be fused is a set of two or more images of the same size captured of the same scene with different focus settings.
4. The method for eliminating the defocus diffusion effect in a microscopic imaging scene according to claim 1 or 2, wherein the decision matrix is a matrix of the same size as the images to be fused whose values lie in the range [1, N];
and N is the number of images to be fused in the image set to be fused.
5. The method for eliminating the defocus diffusion effect in a microscopic imaging scene according to claim 1, wherein establishing the background matrix of the same size as the images to be fused, obtaining the low-valued background pixels of the image set to be fused according to the image set and the decision matrix, and recording them in the background matrix comprises:
S21, establishing a background matrix of the same size as the images to be fused, initializing every entry to a first threshold, and setting the iteration counter i to 1;
S22, establishing a binary candidate matrix of the same size as the images to be fused, in which positions where the decision matrix equals i are set to 1 and all remaining positions are set to 0;
S23, dilating the binary candidate matrix by morphological dilation so as to enlarge the region whose value is 1;
S24, multiplying the dilated binary candidate matrix element-wise by the image to be fused with index i, selecting from the product the pixel values that satisfy the first condition, and recording them in the background matrix, the first condition being:
(0 < pixel value < T) ∪ (pixel value < the value at the same position in the background matrix)
where T is a second threshold; 0 < pixel value < T expresses that a position whose pixel value is below the second threshold is a background pixel;
and S25, increasing the iteration counter i by 1 and returning to S22 to continue iterating, the iteration stopping once i equals (N + 1).
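Steps S21 to S25 can be sketched as follows. This is an assumption-laden illustration, not the patent's code: it uses a sentinel of -1 for the "first threshold", 0-based image indices, and reads the first condition as keeping, among pixels darker than T, the smallest value recorded so far, which matches the sentinel initialization. The dilation amount and all names are illustrative.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def build_background_matrix(images, decision, T=30, sentinel=-1, dilate_iter=3):
    """images: list of 2-D uint8 arrays; decision: 0-based index per pixel."""
    # S21: background matrix initialized to the sentinel ("first threshold").
    background = np.full(decision.shape, sentinel, dtype=np.float64)
    for i in range(len(images)):                  # S25: iterate over all images
        candidate = (decision == i)               # S22: binary candidate matrix
        candidate = binary_dilation(candidate, iterations=dilate_iter)  # S23
        values = images[i].astype(np.float64)
        # S24: keep dark pixels (0 < value < T) within the dilated region,
        # recording only values smaller than what is already stored.
        dark = candidate & (values > 0) & (values < T)
        better = dark & ((background == sentinel) | (values < background))
        background[better] = values[better]
    return background
```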
6. The method for eliminating the defocus diffusion effect in a microscopic imaging scene according to claim 1, wherein generating an image preliminary fusion result graph according to the decision matrix and overwriting it with the values recorded in the background matrix to obtain the image fusion result graph with the defocus diffusion effect eliminated comprises:
obtaining, for each position, the pixel value at that position in the image to be fused indicated by the decision matrix, and stitching these values into the image preliminary fusion result graph;
and overwriting the preliminary fusion result with every value in the background matrix that is not the first threshold, obtaining the image fusion result graph with the defocus diffusion effect eliminated.
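The two steps of claim 6 can be sketched as below, again under stated assumptions: a 0-based decision matrix and a sentinel value of -1 marking untouched entries of the background matrix (the claim's "first threshold"). Names are illustrative.

```python
import numpy as np

def final_fusion(images, decision, background, sentinel=-1):
    """Stitch the preliminary fusion, then overwrite recorded background pixels."""
    stack = np.stack(images)                      # (N, H, W)
    h, w = decision.shape
    rows = np.arange(h)[:, None]
    cols = np.arange(w)[None, :]
    fused = stack[decision, rows, cols]           # preliminary fusion result
    mask = (background != sentinel)               # positions with recorded pixels
    fused[mask] = background[mask].astype(fused.dtype)
    return fused
```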
7. A device for eliminating the defocus diffusion effect in a microscopic imaging scene, characterized by comprising:
a generating unit for generating a decision matrix according to the image set to be fused, wherein each value in the decision matrix records the index of the image to be fused that is most sharply focused at the corresponding position;
a recording unit for establishing a background matrix of the same size as the images to be fused, obtaining the low-valued background pixels of the image set to be fused according to the image set and the decision matrix, and recording them in the background matrix;
and a fusion unit for generating an image preliminary fusion result graph according to the decision matrix, and overwriting the preliminary fusion result with the values recorded in the background matrix to obtain the image fusion result graph with the defocus diffusion effect eliminated.
CN202010654030.XA 2020-07-08 2020-07-08 Method and device for eliminating defocusing diffusion effect in microscopic imaging scene Active CN111861915B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010654030.XA CN111861915B (en) 2020-07-08 2020-07-08 Method and device for eliminating defocusing diffusion effect in microscopic imaging scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010654030.XA CN111861915B (en) 2020-07-08 2020-07-08 Method and device for eliminating defocusing diffusion effect in microscopic imaging scene

Publications (2)

Publication Number Publication Date
CN111861915A true CN111861915A (en) 2020-10-30
CN111861915B CN111861915B (en) 2021-08-13

Family

ID=73152625

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010654030.XA Active CN111861915B (en) 2020-07-08 2020-07-08 Method and device for eliminating defocusing diffusion effect in microscopic imaging scene

Country Status (1)

Country Link
CN (1) CN111861915B (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120093399A1 (en) * 2010-10-15 2012-04-19 Chung-Ang University Industry-Academy Cooperation Foundation Apparatus and method for enhancing image quality of image captured by using multiple color-filter aperture
CN102982522A (en) * 2012-12-14 2013-03-20 东华大学 Method for realizing real-time fusion of multi-focus microscopic images
CN103308452A (en) * 2013-05-27 2013-09-18 中国科学院自动化研究所 Optical projection tomography image capturing method based on depth-of-field fusion
CN107909560A (en) * 2017-09-22 2018-04-13 洛阳师范学院 A kind of multi-focus image fusing method and system based on SiR
CN108734686A (en) * 2018-05-28 2018-11-02 成都信息工程大学 Multi-focus image fusing method based on Non-negative Matrix Factorization and visual perception
CN110334779A (en) * 2019-07-16 2019-10-15 大连海事大学 A kind of multi-focus image fusing method based on PSPNet detail extraction


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
陈康平: "纤维图像的多焦面融合技术的研究", 《中国优秀硕士学位论文全文数据库(电子期刊)信息科技辑》 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112200887A (en) * 2020-10-10 2021-01-08 北京科技大学 Multi-focus image fusion method based on gradient perception
CN112200887B (en) * 2020-10-10 2023-08-01 北京科技大学 Multi-focus image fusion method based on gradient perception
CN116437205A (en) * 2023-06-02 2023-07-14 华中科技大学 Depth of field expansion method and system for multi-view multi-focal length imaging
CN116437205B (en) * 2023-06-02 2023-08-11 华中科技大学 Depth of field expansion method and system for multi-view multi-focal length imaging

Also Published As

Publication number Publication date
CN111861915B (en) 2021-08-13

Similar Documents

Publication Publication Date Title
CN111861915B (en) Method and device for eliminating defocusing diffusion effect in microscopic imaging scene
US10755429B2 (en) Apparatus and method for capturing images using lighting from different lighting angles
CN109239900B (en) Full-automatic rapid focusing method for large-field acquisition of microscopic digital image
US10395348B2 (en) Image pickup apparatus, image processing apparatus, and control method of image pickup apparatus
JP2004164624A (en) Method and apparatus for low depth of field image segmentation
JP6772201B2 (en) Image processing methods, image processing devices, programs, and recording media
JP2013042371A (en) Image pickup device and distance information acquisition method
US11282176B2 (en) Image refocusing
JP5818552B2 (en) Image processing apparatus, image processing method, and program
CN112995517A (en) High-precision microscopic image automatic focusing method and system, and computer equipment
JP2013228798A (en) Image processing device, imaging device, endoscope, program and image processing method
KR20170101532A (en) Method for image fusion, Computer program for the same, and Recording medium storing computer program for the same
JP2019016975A (en) Image processing system and image processing method, imaging apparatus, program
Peng et al. Selective bokeh effect transformation
JP2016059051A (en) Imaging device and distance information acquisition method
Liang et al. Guidance network with staged learning for image enhancement
CN112381842B (en) Method and system for acquiring microscopic panoramic image focus mapping surface
CN114785953A (en) SFR-based camera automatic focusing method and device
KR101056051B1 (en) Focus Control and Method of Auto Focus System
Antunes et al. All-in-focus imaging using a series of images on different focal planes
CN114384681A (en) Rapid and accurate automatic focusing method and system for microscope, computer equipment and medium
Favaro Depth from focus/defocus
Lee et al. Automated cell junction tracking with modified active contours guided by SIFT flow
CN117969518B (en) Cell image acquisition method and system
Fu et al. An effective interpretation of defocusing and the corresponding defocus convolution kernel

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant