CN106846383B - High dynamic range image imaging method based on 3D digital microscopic imaging system - Google Patents


Info

Publication number
CN106846383B
CN106846383B (application CN201710057799.1A)
Authority
CN
China
Prior art keywords
image
high dynamic
focus
sequence
dynamic range
Prior art date
Legal status
Expired - Fee Related
Application number
CN201710057799.1A
Other languages
Chinese (zh)
Other versions
CN106846383A (en)
Inventor
Zheng Chi (郑驰)
Qiu Guoping (邱国平)
Current Assignee
University of Nottingham Ningbo China
Original Assignee
University of Nottingham Ningbo China
Priority date
Filing date
Publication date
Application filed by University of Nottingham Ningbo China filed Critical University of Nottingham Ningbo China
Priority to CN201710057799.1A
Publication of CN106846383A
Application granted
Publication of CN106846383B

Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B21/00Microscopes
    • G02B21/36Microscopes arranged for photographic purposes or projection purposes or digital imaging or video purposes including associated control and data processing arrangements
    • G02B21/365Control or image processing arrangements for digital or video microscopes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10056Microscopic image
    • G06T2207/10061Microscopic image from scanning electron microscope

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Microscopes, Condensers (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a high dynamic range image imaging method for a 3D digital microscopic imaging system. The method generates a high dynamic range image of the object to be observed, acquires an original high-dynamic multi-focus sequence image of the sample, performs sub-pixel image registration and alignment using phase matching and the Fourier transform, and segments the target object with a foreground/background segmentation method. The segmented image is then decomposed with a quadtree, the sharp image blocks in the image sequence are marked, and the height corresponding to each image is recorded. Finally, the marked sharp image blocks are fused into the three-dimensional shape of the object to be observed, and median filtering is applied to the generated shape to eliminate the jagging caused by insufficient sampling frequency, so that the generated three-dimensional shape of the object is smoother.

Description

High dynamic range image imaging method based on 3D digital microscopic imaging system
Technical Field
The invention relates to the technical field of high-definition high-precision microscopic imaging detection, in particular to a high-dynamic-range image imaging method based on a 3D digital microscopic imaging system.
Background
The multi-focus 3D, or Shape From Focus (SFF), technique is commonly used in the field of digital microscopic image processing. It has attracted wide attention from experts because it requires only a conventional monocular microscope to obtain the three-dimensional shape of an observed sample. Unlike stereoscopic vision, which obtains depth information from a binocular pair, the multi-focus 3D technique recovers and reconstructs the depth information of an object simply by varying the distance between the observed object and the lens and detecting the sharp regions in each image.
However, the main drawback of the multi-focus 3D technique is that when the observed sample contains highly reflective regions, the limited dynamic range of the acquired images leaves some regions short of detail, or even entirely without detail, which greatly degrades the accuracy of the reconstructed three-dimensional shape. At present, much research still focuses on the influence of the focus measure on reconstruction accuracy, while neglecting the influence of the original images' dynamic range on the reconstruction result.
To overcome the limited dynamic range of acquired images, high dynamic range imaging was proposed. A High Dynamic Range (HDR) image is obtained by calibrating the camera, fusing images of the same scene taken with different exposure times, and producing a 32-bit illumination spectrum (radiance map) of the scene. These 32-bit spectral images accurately and truly reflect the dynamic range of the scene; they are then mapped to ordinary 8-bit images through local tone mapping, so that conventional display devices can display and save them. However, because of the high computational complexity of HDR imaging, current commercial microscopic 3D reconstruction methods still have difficulty incorporating this technique.
Disclosure of Invention
The technical problem to be solved by the present invention is to provide a high dynamic range image imaging method based on a 3D digital microscopic imaging system in view of the above prior art. The high dynamic range image imaging method can overcome the defect that the existing image imaging method cannot shoot a high dynamic scene, and can simultaneously and accurately generate the three-dimensional shape of an object to be observed, thereby providing all-around 3D stereoscopic vision enjoyment for an observer.
The technical scheme adopted by the invention for solving the technical problems is as follows: the high dynamic range image imaging method based on the 3D digital microscopic imaging system is characterized by comprising the following steps of:
step 1, aiming at an object to be observed on a microscope objective table, obtaining a high dynamic multi-focus sequence image required by three-dimensional stereo imaging by adjusting the height of the objective table and utilizing a camera to obtain a high dynamic multi-focus image of each layer from the bottom of the object to be observed to the top of the object to be observed;
step 2, registering the obtained original high-dynamic multi-focus sequence images by adopting a phase matching method so as to enable the spatial positions, the scaling scales and the image sizes of the front and back connected image pairs in the original high-dynamic multi-focus sequence images to be correspondingly consistent, thereby obtaining the registered high-dynamic multi-focus sequence images;
step 3, for the registered high-dynamic multi-focus sequence image, extracting the observation sample region for which a three-dimensional body needs to be generated, using a background-accumulation foreground/background segmentation method;
step 4, the observation sample region is segmented by adopting a quadtree segmentation method, a clear part in each image of the high-dynamic multi-focusing sequence image is detected, and height information corresponding to each image is recorded;
and 5, fusing the clear parts in the detected images so as to generate the three-dimensional shape of the object to be observed.
Further, the process of acquiring a high dynamic range image of each slice by using a camera in step 1 includes:
(a) calibrating a corresponding curve of the camera; (b) acquiring images of different exposure values in the same scene; (c) generating a 32-bit illumination spectrogram of the scene by using the corresponding curve of the calibrated camera; (d) and mapping the 32-bit illumination spectrogram to an 8-bit common image by using local tone mapping, and storing the common image into a format which can be displayed and stored by a computer.
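Steps (b)–(d) can be sketched as follows, assuming step (a) has already calibrated the camera response to linear. A Debevec-style weighted merge combines the differently exposed 8-bit frames into a 32-bit radiance map, which a tone-mapping operator compresses to an 8-bit displayable image. The patent specifies local tone mapping; the global Reinhard-style operator below is a simplification, and all function names are illustrative.

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Merge 8-bit images of the same scene into a 32-bit radiance map.

    Assumes a linear (already calibrated) camera response; a hat-shaped
    weight discounts under- and over-exposed pixels (Debevec-style)."""
    acc = np.zeros(images[0].shape, dtype=np.float64)
    wsum = np.zeros_like(acc)
    for img, t in zip(images, exposure_times):
        z = img.astype(np.float64)
        w = 1.0 - np.abs(z - 127.5) / 127.5 + 1e-6   # hat weight, avoid /0
        acc += w * (z / 255.0) / t                    # irradiance estimate
        wsum += w
    return (acc / wsum).astype(np.float32)            # 32-bit radiance map

def tone_map(radiance, gamma=2.2):
    """Map the 32-bit radiance map to an 8-bit displayable image
    (global Reinhard-style compression followed by gamma)."""
    L = radiance / (1.0 + radiance)                   # compress dynamic range
    L = np.power(L / L.max(), 1.0 / gamma)
    return (L * 255).astype(np.uint8)
```

The resulting 8-bit array can be written out in any ordinary image format for display and storage, as step (d) requires.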
Further, in step 1, the obtaining process of the original high dynamic multi-focus sequence image includes: firstly, changing the distance between an object to be observed and an objective lens of a microscope by moving the height of an objective table to realize different focusing plane image sequences of a monocular microscope; secondly, recording the requirement of height information of each focusing plane image; and thirdly, carrying out focusing detection on each focusing plane image, and recording pixel points with the maximum focusing definition in each focusing plane image for subsequent three-dimensional shape reconstruction.
Specifically, in step 2, the process of registering the obtained original high-dynamic multi-focus sequence image by the phase matching method includes:
firstly, in the original high-dynamic multi-focus sequence image, for each pair of adjacent images, each image in the pair is converted into a grayscale image, so that a grayscale image pair is obtained;
secondly, extracting phase information of each frequency band from the converted gray scale image pair by adopting a complex band-pass filter;
thirdly, the extracted phase information is used, via the Fourier transform, to shift the grayscale images at the sub-pixel level so that the positions of the two adjacent images are consistent;
finally, this process is repeated for each set of image pairs in the original high dynamic multi-focus sequence of images until the scaling and displacement of all images in the high dynamic multi-focus image sequence remain consistent.
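A minimal sketch of the phase-matching idea follows (translation only; the per-band complex band-pass filtering and scale handling described above are omitted, and the function name is illustrative). The normalized cross-power spectrum of two images has a phase that encodes their relative shift, and its inverse FFT peaks at that shift.

```python
import numpy as np

def phase_correlate(a, b):
    """Estimate the integer translation between two grayscale images
    via the normalized cross-power spectrum (phase correlation)."""
    A = np.fft.fft2(a)
    B = np.fft.fft2(b)
    R = A * np.conj(B)
    R /= np.maximum(np.abs(R), 1e-12)       # keep phase information only
    corr = np.real(np.fft.ifft2(R))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # shifts beyond half the image wrap around to negative values
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx
```

Interpolating the correlation peak between samples would refine this to the sub-pixel accuracy the method calls for.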
Specifically, the process of segmenting the observation sample region by using the quadtree segmentation method in the step 4 includes:
firstly, inputting an original high-dynamic multi-focus sequence image into a quad-tree as a layer of a quad-tree root;
secondly, setting an image decomposition condition, and processing according to whether each layer of image in the quadtree meets the decomposition condition:
if the image decomposition condition is met for one layer of image, carrying out quad decomposition on the layer of image, and inputting the image to the next layer of the quad tree; repeating the operation until the minimum image block obtained by decomposing the image sequence does not meet the image decomposition condition, and ending the quadtree decomposition process; wherein the set image decomposition conditions are as follows:
respectively calculating the maximum difference value MDFM and the gradient difference value SMDG of the focusing factor of each decomposed image block in the image sequence in the quad-tree; the calculation formulas of the focusing factor maximum difference value MDFM and the gradient difference value SMDG are respectively as follows:
MDFM = FMmax − FMmin;
SMDG = ΣxΣy(gradmax(x,y) − gradmin(x,y));
wherein FMmax represents the maximum of the focus measurements and FMmin represents the minimum; gradmax(x,y) represents the maximum gradient value at (x,y) and gradmin(x,y) the minimum gradient value;
for an image block of a layer in the quadtree, if MDFM ≥ 0.98 × SMDG, a fully focused image block exists in that layer's image sequence and the block is not decomposed further; otherwise, decomposition of the layer's image blocks continues until all images in the quadtree are decomposed into sub-blocks that cannot be decomposed further.
Specifically, the maximum value FMmax and the minimum value FMmin of the focus measurement are obtained as follows:
firstly, calculating a gradient matrix of each pixel in a layer of image of a quad tree root, wherein the calculation formula is as follows:
GMi=gradient(Ii),i=1,2,…,n;
wherein Ii is the ith original high-dynamic multi-focus image, GMi is the gradient matrix corresponding to Ii, and n is the total number of images in the original high-dynamic multi-focus sequence;
secondly, the maximum gradient matrix and the minimum gradient matrix in all gradient matrices of each point of the image of the layer are found, and the formula is as follows:
GMmax=max(GMi(x,y)),i=1,2,…,n;
GMmin=min(GMi(x,y)),i=1,2,…,n;
thirdly, calculating the sum of the gradient matrixes of all the points of the image of the layer, wherein the calculation formula is as follows:
FMi=ΣxΣygradi(x,y),i=1,2,…,n;
finally, respectively finding out the maximum value and the minimum value of the sum of the gradient matrixes, wherein the calculation formula is as follows:
FMmax=max{FMi},i=1,2,…,n;
FMmin=min{FMi},i=1,2,…,n。
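Under the assumption that gradient(·) denotes the gradient-magnitude matrix of an image, the quantities above can be sketched as follows (illustrative names; the exact gradient operator is not fixed by the text):

```python
import numpy as np

def focus_measures(images):
    """For a sequence of grayscale focal-plane images, compute the
    per-image focus measure FM_i (sum of gradient magnitudes over x, y)
    and the decomposition statistics MDFM and SMDG."""
    grads = []
    for img in images:
        gy, gx = np.gradient(img.astype(np.float64))
        grads.append(np.hypot(gx, gy))        # gradient-magnitude matrix GM_i
    grads = np.stack(grads)                   # shape (n, H, W)
    fm = grads.sum(axis=(1, 2))               # FM_i = sum_x sum_y grad_i(x, y)
    mdfm = fm.max() - fm.min()                # MDFM = FM_max - FM_min
    # SMDG: summed per-pixel spread between the max and min gradient stacks
    smdg = (grads.max(axis=0) - grads.min(axis=0)).sum()
    return fm, mdfm, smdg
```

In the decomposition loop, a block would be accepted as containing a fully focused patch when MDFM ≥ 0.98 × SMDG over that block.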
specifically, the process of fusing the clear parts of the respective images in step 5 includes: and taking all the obtained clear parts as clear sub-image blocks, respectively recording the height information of the clear sub-image blocks, and fusing all the clear sub-image blocks into a complete three-dimensional image of the observation sample.
In an improvement, the step 5 further includes: and filtering the generated three-dimensional shape by adopting a median filtering method to eliminate the sawtooth effect of the three-dimensional shape caused by insufficient sampling frequency, so that the generated three-dimensional shape is smoother.
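The median-filtering refinement can be sketched as a small window filter over the reconstructed height map (a 3×3 window is assumed here; the text does not specify the window size):

```python
import numpy as np

def median_filter3(depth):
    """3x3 median filter over the reconstructed height map Z(x, y);
    suppresses the jagging caused by coarse sampling along the height axis."""
    padded = np.pad(depth, 1, mode='edge')
    h, w = depth.shape
    # collect the 9 shifted views of the padded map and take their median
    stack = [padded[dy:dy + h, dx:dx + w]
             for dy in range(3) for dx in range(3)]
    return np.median(np.stack(stack), axis=0)
```

Isolated height spikes (single mis-focused pixels) are replaced by the local median, smoothing the generated three-dimensional shape.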
Compared with the prior art, the invention has the advantages that:
firstly, the high dynamic range image imaging method of the invention combines high dynamic range imaging, three-dimensional imaging and multi-depth-of-field image fusion: it acquires image sequences of the same scene at different exposure times, generates a 32-bit illumination spectrogram of the scene, maps it to an ordinary 8-bit image using local tone mapping, and stores it in a format a computer can display and save; using the tone-mapping technique, a high dynamic range microscopic video can be displayed and transmitted in real time, so that an observer can watch the object to be observed dynamically in real time;
secondly, although high dynamic range video technology has high computational complexity, the method reduces that complexity by adopting phase matching, quadtree segmentation and similar techniques, so that the video signal can be processed in real time and a real-time microscopic video display generated;
thirdly, the high dynamic range image imaging method can observe the object to be observed in real time with high definition, and overcomes the defect that the reflective area and the non-reflective area can not be observed clearly on a high-contrast sample by the conventional image imaging technology;
finally, the high dynamic range image imaging method of the invention can obtain the complete focusing image synthesized by the images with different focuses; in the process of processing images with different focuses, the three-dimensional coordinates of points on the surface of the image are restored by automatically acquiring the depths of the points in the images, and powerful auxiliary guarantee is provided for quality detection of emerging materials.
Drawings
Fig. 1 is a schematic flow chart of a high dynamic range image imaging method based on a 3D digital microscopic imaging system according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a 3D digital microscopy imaging system according to one embodiment of the present invention;
FIG. 3 is an original high dynamic multi-focus sequence image corresponding to a metal screw according to a first embodiment;
FIG. 4 is a comparison graph of a high dynamic range image and a normal automatic exposure image of a metal screw obtained in the first embodiment; wherein, the left column is a corresponding high dynamic range image, and the right column is a corresponding common automatic exposure image;
FIG. 5 is a schematic diagram illustrating the extraction of a foreground image according to the first embodiment;
FIG. 6a is a 3D stereoscopic image without image texture mapping generated by using a high dynamic range image according to the first embodiment;
FIG. 6b is a 3D stereoscopic image with image texture mapping generated using a high dynamic range image according to the first embodiment;
FIG. 6c is a 3D stereoscopic image without image texture mapping generated using the original auto-exposure image according to the first embodiment;
FIG. 6D is a 3D stereoscopic image with image texture mapping generated using the original auto-exposure image according to the first embodiment;
FIG. 6e is a true value diagram of a 3D stereoscopic image without image texture mapping according to the first embodiment;
FIG. 6f is a true value diagram of a 3D stereo image with image texture mapping according to an embodiment;
FIG. 7a is a 3D stereoscopic image without image texture mapping generated using a high dynamic range image according to the second embodiment;
FIG. 7b is a 3D stereoscopic image with image texture mapping generated using high dynamic range images according to the second embodiment;
FIG. 7c is a 3D stereoscopic image without image texture mapping generated using the original auto-exposure image according to the second embodiment;
FIG. 7D is a 3D stereoscopic image with image texture mapping generated using the original auto-exposure image according to the second embodiment;
FIG. 7e is a true value diagram of the 3D stereoscopic image without image texture mapping according to the second embodiment;
FIG. 7f is a true value diagram of a 3D stereo image with image texture mapping according to the second embodiment;
FIG. 8 is a comparison graph of the square root errors of 3D stereoscopic shapes generated with and without high dynamic range images according to the second embodiment;
FIG. 9a is a 3D stereoscopic image without image texture mapping generated using a high dynamic range image in the third embodiment;
FIG. 9b is a 3D stereoscopic image with image texture mapping generated using a high dynamic range image according to the third embodiment;
FIG. 9c is a 3D stereoscopic image without image texture mapping generated using the original auto-exposure image according to the third embodiment;
FIG. 9D is a 3D stereoscopic image with image texture mapping generated using the original auto-exposure image according to the third embodiment;
FIG. 9e is a true value diagram of a 3D stereo image without image texture mapping according to the third embodiment;
FIG. 9f is a true value diagram of a 3D stereo image with image texture mapping according to the third embodiment;
FIG. 10 is a comparison plot of the square root errors of 3D stereoscopic images generated by the method using high dynamic range images and by the method using original auto-exposure images.
Detailed Description
The invention is described in further detail below with reference to the accompanying examples.
Example one
As shown in fig. 2, the 3D digital microscopic imaging system used in the first embodiment includes a conventional optical microscope, an automatic stage capable of moving in any direction of the X-axis, Y-axis, and Z-axis, a CMOS camera, and a computer. The object to be observed in the first embodiment is a metal screw, and the metal screw is placed on the automatic object stage. Referring to fig. 1, a high dynamic range image imaging method based on a 3D digital microscopic imaging system in the first embodiment includes the following steps:
step 1, aiming at an object to be observed on a microscope objective table, namely a metal screw, a CMOS camera is focused on each layer of the object to be observed, namely each layer of the metal screw, by adjusting the height of the objective table, and a high-dynamic multi-focus image of each layer from the bottom of the metal screw to the top of the metal screw is obtained by using the camera so as to obtain an original high-dynamic multi-focus sequence image required by three-dimensional imaging; the original high dynamic multi-focus sequence image for the metal screw is shown in fig. 3; the process of acquiring the high-dynamic multi-focus image of the object to be observed comprises two processes of acquiring the high-dynamic-range image and acquiring the multi-focus image; specifically, the process of acquiring the high dynamic range image of each layer of the object to be observed includes:
(a) calibrating a corresponding curve of the camera; (b) acquiring images of different exposure values in the same scene; (c) generating a 32-bit illumination spectrogram of the scene by using the corresponding curve of the calibrated camera; (d) and mapping the 32-bit illumination spectrogram to an 8-bit common image by using local tone mapping, and storing the common image into a format which can be displayed and stored by a computer. The high dynamic range image for each layer corresponding to the metal screw is shown in the left column of fig. 4;
step 2, registering the obtained original high-dynamic multi-focus sequence images by adopting a phase matching method so as to ensure that the spatial positions, the scaling scales and the image sizes of the front and back connected image pairs in the original high-dynamic multi-focus sequence images are correspondingly consistent, thereby obtaining the registered high-dynamic multi-focus sequence images; the process of registering the obtained original high-dynamic multi-focus sequence image by the phase matching method comprises the following steps:
firstly, in the original high-dynamic multi-focus sequence image, for each pair of adjacent images, each image in the pair is converted into a grayscale image so as to obtain a grayscale image pair;
secondly, extracting phase information of each frequency band from the converted gray scale image pair by adopting a complex band-pass filter;
thirdly, the extracted phase information is used, via the Fourier transform, to shift the grayscale image pair at the sub-pixel level so as to ensure the positional consistency of the two adjacent images;
finally, this process is repeated for each set of image pairs in the original high dynamic multi-focus sequence of images until the scaling and displacement of all images in the high dynamic multi-focus image sequence remain consistent.
Step 3, for the registered high-dynamic multi-focus sequence image, a background-accumulation foreground/background segmentation method is adopted to extract the observation sample region for which the three-dimensional body is to be generated; referring to fig. 5, the background corresponding to the metal screw is extracted using inter-frame differencing, and threshold segmentation is then performed to obtain the foreground image;
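The inter-frame-difference extraction in step 3 can be sketched as follows (the threshold value is illustrative): pixels whose intensity varies across the focal sequence are taken as foreground, while the static background accumulates a near-zero difference.

```python
import numpy as np

def foreground_mask(frames, thresh=10.0):
    """Accumulated inter-frame difference plus thresholding, a sketch of
    the background-accumulation foreground/background segmentation step."""
    frames = np.stack([f.astype(np.float64) for f in frames])
    # accumulate |frame-to-frame differences| over the sequence
    diff = np.abs(np.diff(frames, axis=0)).sum(axis=0)
    return diff > thresh                      # boolean foreground mask
```

The mask then restricts the quadtree decomposition of step 4 to the sample region, avoiding wasted work on the empty background.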
step 4, segmenting the observation sample region by adopting a quadtree segmentation method, detecting a clear part in each image of the high-dynamic multi-focus sequence image, and recording height information corresponding to each image; wherein,
the following is a description of the quadtree splitting method in the first embodiment:
firstly, inputting an original high-dynamic multi-focus sequence image into a quad-tree as a layer of a quad-tree root;
secondly, setting an image decomposition condition, and processing according to whether each layer of image in the quadtree meets the decomposition condition:
if a layer's image meets the image decomposition condition, that layer is quad-decomposed and the result is input to the next layer of the quadtree; this continues until the smallest image block obtained by decomposing the image sequence no longer meets the image decomposition condition, at which point the quadtree decomposition ends; the image decomposition conditions are described below:
respectively calculating the maximum difference value MDFM and the gradient difference value SMDG of the focusing factor of each decomposed image block in the image sequence in the quad-tree; the calculation formulas of the focusing factor maximum difference value MDFM and the gradient difference value SMDG are respectively as follows:
MDFM = FMmax − FMmin;
SMDG = ΣxΣy(gradmax(x,y) − gradmin(x,y));
wherein FMmax represents the maximum of the focus measurements and FMmin represents the minimum; gradmax(x,y) represents the maximum gradient value at (x,y) and gradmin(x,y) the minimum gradient value. The maximum value FMmax and the minimum value FMmin of the focus measurement are calculated as follows:
firstly, calculating a gradient matrix of each pixel in a layer of image of a quad tree root, wherein the calculation formula is as follows:
GMi=gradient(Ii),i=1,2,…,n;
wherein Ii is the ith original high-dynamic multi-focus image, GMi is the gradient matrix corresponding to Ii, and n is the total number of images in the original high-dynamic multi-focus sequence;
secondly, the maximum gradient matrix and the minimum gradient matrix in all gradient matrices of each point of the image of the layer are found, and the formula is as follows:
GMmax=max(GMi(x,y)),i=1,2,…,n;
GMmin=min(GMi(x,y)),i=1,2,…,n;
thirdly, calculating the sum of the gradient matrixes of all the points of the image of the layer, wherein the calculation formula is as follows:
FMi = ΣxΣy gradi(x,y), i = 1,2,…,n;
finally, respectively finding out the maximum value and the minimum value of the sum of the gradient matrixes, wherein the calculation formula is as follows:
FMmax=max{FMi},i=1,2,…,n;FMmin=min{FMi},i=1,2,…,n。
the process for detecting sharp portions in each image of a high dynamic multi-focus sequence image is explained as follows:
for each image block sequence in the quadtree, finding an image block with the largest gradient matrix in the image block sequence, and recording the position of the image block with the largest gradient matrix in the image sequence and the height information of the image block;
FMmax(x,y) = max{fmi(x,y)}, i = 1,2,…,n; wherein fmi(x,y) represents the gradient matrix of the ith image in the image sequence.
Step 5, fusing the clear parts of the detected images so as to generate the three-dimensional shape of the object to be observed, namely the three-dimensional image of the metal screw; the three-dimensional image corresponding to the metal screw is denoted Z:
Z(x,y) = zi(x,y), wherein zi(x,y) represents the clear image block of the ith image in the image sequence.
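A per-pixel sketch of the fusion in steps 4–5 follows. The patent operates on quadtree blocks; this pixel-wise version conveys the same select-the-sharpest-plane idea, with illustrative names: at each (x, y), the focal plane whose image has the largest gradient magnitude is chosen, and that plane's recorded stage height becomes Z(x, y).

```python
import numpy as np

def fuse_depth(images, heights):
    """Shape-from-focus fusion: pick, per pixel, the focal plane with the
    largest gradient magnitude and take its stage height as Z(x, y)."""
    mags = []
    for img in images:
        gy, gx = np.gradient(img.astype(np.float64))
        mags.append(np.hypot(gx, gy))         # sharpness proxy per plane
    best = np.argmax(np.stack(mags), axis=0)  # index of sharpest plane
    return np.asarray(heights, dtype=np.float64)[best]
```

The resulting height map Z is what the median filtering of the improvement step then smooths.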
In order to compare the conventional method of generating a 3D stereoscopic shape from original auto-exposure images with the method of the present invention using high dynamic range images, the first embodiment gives a comparison of the stereoscopic images generated for the metal screw by the two methods; see figs. 6a to 6f. In the present invention, the technique using original auto-exposure images is denoted Normal SFF, and the technique using high dynamic range images is denoted HDR-SFF. Wherein:
in order to compare the accuracy of the two 3D solid shape generating methods, in the first embodiment of the present invention, a square root error is introduced to measure the difference between the two 3D solid shape generating methods and a true value under the same condition:
RMSE = √(ΣiΣj(GT(i,j) − Z(i,j))² / (M × N));
wherein GT(i,j) represents the true value, Z(i,j) represents the Normal SFF or HDR-SFF value, and M × N is the image size.
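The comparison metric can be computed directly (assuming M × N is the image size and the error is the root of the mean squared difference):

```python
import numpy as np

def root_error(ground_truth, z):
    """Square-root (RMS) error between the reconstructed height map Z
    and the ground-truth surface G_T."""
    gt = np.asarray(ground_truth, dtype=np.float64)
    z = np.asarray(z, dtype=np.float64)
    return np.sqrt(np.mean((gt - z) ** 2))
```

A lower value indicates a reconstruction closer to the true surface, which is how the Normal SFF and HDR-SFF results are ranked below.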
Table 1 shows the square root error obtained for two 3D solid shape generation methods using 22 different focusing factors. As can be seen from comparing the results in table 1, for the same focusing factor, the focusing factor square root error value corresponding to the 3D stereoscopic shape generated by using the high dynamic range image is smaller than the focusing factor square root error value corresponding to the 3D stereoscopic shape generated without using the high dynamic range image. The results in table 1 show that the 3D stereoscopic shapes generated using the high dynamic range images of the present invention are more accurate than the 3D stereoscopic shapes generated without the high dynamic range images.
TABLE 1 (square root errors of the two 3D stereoscopic shape generation methods under the 22 focusing factors; rendered as an image in the original)
Example two
In the second embodiment, a bank card made of plastic is used as the object to be observed; the bank card bears the lowercase English letter "d". The steps for generating the three-dimensional stereo image of the bank card are the same as those for the metal screw in the first embodiment and are not repeated here.
In the second embodiment, in order to verify the accuracy and robustness of the high dynamic range image imaging method in the present invention, the second embodiment provides a corresponding high dynamic range image generated by the bank card, and specifically refer to fig. 7a to 7 f. Fig. 8 is a square root error comparison diagram of the bank card in the second embodiment, which is generated by using the high dynamic range image to generate the 3D stereoscopic shape and without using the high dynamic range image.
As can be seen from fig. 8, for the same focusing factor, the square root error of the 3D stereoscopic shape generated using high dynamic range images is smaller than that of the shape generated without them. The 3D stereoscopic shapes generated using the high dynamic range images of the present invention are therefore more accurate than those generated without high dynamic range images.
EXAMPLE III
In the third embodiment, a metal chip is used as an object to be observed. The steps of generating the three-dimensional stereo image of the metal chip are the same as those of generating the three-dimensional stereo image of the metal screw in the first embodiment, and are not described herein again.
FIG. 10 is a comparison plot of the square root errors of the 3D stereoscopic images generated by the method using high dynamic range images and by the method using original auto-exposure images.
As can be seen from fig. 10, for the same focus factor, the square root error of the 3D stereoscopic shape generated from the high dynamic range images is smaller than that of the 3D stereoscopic shape generated without them. The 3D stereoscopic shapes generated using the high dynamic range images of the present invention are therefore more accurate.

Claims (7)

1. A high dynamic range image imaging method based on a 3D digital microscopic imaging system, characterized by comprising the following steps:
step 1, for an object to be observed on the microscope stage, obtaining the high dynamic multi-focus sequence of images required for three-dimensional imaging by adjusting the stage height and using a camera to acquire a high dynamic multi-focus image of each layer, from the bottom of the object to its top; the process of acquiring the high dynamic range image of each layer with the camera comprises:
(a) calibrating the response curve of the camera; (b) acquiring images of the same scene at different exposure values; (c) generating a 32-bit radiance map of the scene using the calibrated camera response curve; (d) mapping the 32-bit radiance map to an 8-bit ordinary image by local tone mapping, and storing it in a format that a computer can display and save;
step 2, registering the obtained original high dynamic multi-focus sequence images by a phase matching method, so that the spatial position, scaling and image size of each pair of consecutive images in the sequence are consistent, thereby obtaining the registered high dynamic multi-focus sequence images;
step 3, for the registered high dynamic multi-focus sequence images, extracting the observation sample region for which a three-dimensional body is to be generated, using a background-accumulation foreground/background segmentation method;
step 4, segmenting the observation sample region by a quadtree segmentation method, detecting the sharp part in each image of the high dynamic multi-focus sequence, and recording the height information corresponding to each image;
and step 5, fusing the detected sharp parts of the images to generate the three-dimensional shape of the object to be observed.
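The exposure-merge and tone-mapping stages of step 1 can be sketched as follows. This is a minimal pure-NumPy illustration, not the patented implementation: the camera response curve is assumed logarithmic rather than calibrated as in sub-step (a), and a global Reinhard-style operator stands in for the local tone mapping of sub-step (d); all function names are assumptions.

```python
import numpy as np

def merge_exposures(images, exposure_times, response=np.log):
    """Merge differently exposed 8-bit images into a radiance map.

    Debevec-style weighted average in the log domain; the response
    curve is assumed already calibrated and approximated by np.log.
    """
    eps = 1e-6
    num = np.zeros(images[0].shape, dtype=np.float64)
    den = np.zeros_like(num)
    for img, t in zip(images, exposure_times):
        z = img.astype(np.float64) / 255.0
        # hat-shaped weight: trust mid-tones, distrust near-saturated pixels
        w = 1.0 - np.abs(2.0 * z - 1.0)
        num += w * (response(z + eps) - np.log(t))
        den += w
    log_radiance = num / np.maximum(den, eps)
    return np.exp(log_radiance)           # 32-bit-style radiance map

def tone_map(radiance):
    """Global Reinhard-style operator mapping radiance to 8 bits."""
    ld = radiance / (1.0 + radiance)
    return np.clip(ld * 255.0, 0, 255).astype(np.uint8)
```

The hat-shaped weight discounts under- and over-exposed pixels, so each pixel of the radiance map is dominated by the exposures that recorded it best.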
2. The high dynamic range image imaging method according to claim 1, wherein in step 1, obtaining the original high dynamic multi-focus sequence images comprises:
firstly, changing the distance between the object to be observed and the microscope objective by moving the stage height, so as to obtain image sequences of different focal planes of the monocular microscope; secondly, recording the height information of each focal-plane image; and thirdly, performing focus detection on each focal-plane image, and recording the pixel points with maximum focus sharpness in each focal-plane image for subsequent three-dimensional shape reconstruction.
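The focus-detection sub-step can be illustrated with a simple depth-from-focus sketch; the squared-gradient sharpness measure and the function names are assumptions for illustration, not taken from the patent.

```python
import numpy as np

def depth_from_focus(stack, heights):
    """Pick, per pixel, the stage height whose image is sharpest.

    `stack` is a list of grayscale focal-plane images and `heights`
    the recorded stage height of each plane; sharpness is a simple
    per-pixel squared-gradient measure (an assumed choice).
    """
    sharp = []
    for img in stack:
        gy, gx = np.gradient(img.astype(np.float64))
        sharp.append(gx ** 2 + gy ** 2)         # per-pixel focus measure
    idx = np.argmax(np.stack(sharp), axis=0)    # sharpest plane per pixel
    return np.asarray(heights, dtype=np.float64)[idx]
```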
3. The high dynamic range image imaging method according to claim 1, wherein in step 2, the process of registering the obtained original high dynamic multi-focus sequence images by the phase matching method comprises:
firstly, in the original high dynamic multi-focus sequence images, for each pair of consecutive images, converting each image of the pair into a grayscale image, thereby obtaining a grayscale image pair;
secondly, extracting the phase information of each frequency band from the converted grayscale image pair with a complex band-pass filter;
thirdly, using the extracted phase information to shift the grayscale images at the sub-pixel level via the Fourier transform, so as to ensure that the positions of the two consecutive images are consistent;
finally, repeating this process for each image pair in the original high dynamic multi-focus sequence until the scaling and displacement of all images in the sequence are consistent.
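A minimal stand-in for this registration step is integer-shift phase correlation; the patent's method additionally uses complex band-pass filters and works at sub-pixel precision, which this sketch omits.

```python
import numpy as np

def phase_correlate(ref, mov):
    """Estimate the translation between two grayscale images by phase
    correlation: keep only the phase of the cross-power spectrum and
    locate the resulting correlation peak (integer shifts only).
    """
    cross = np.conj(np.fft.fft2(ref)) * np.fft.fft2(mov)
    cross /= np.maximum(np.abs(cross), 1e-12)   # normalise: phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # shifts beyond half the image wrap around to negative offsets
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)
```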
4. The high dynamic range image imaging method according to claim 1, wherein the process of segmenting the observation sample region by the quadtree segmentation method in step 4 comprises:
firstly, inputting the original high dynamic multi-focus sequence images into the quadtree as its root layer;
secondly, setting an image decomposition condition, and processing each layer of images in the quadtree according to whether it satisfies the decomposition condition:
if a layer of images satisfies the image decomposition condition, decomposing that layer into four sub-blocks and inputting them to the next layer of the quadtree; repeating this operation until the smallest image blocks obtained by decomposing the image sequence no longer satisfy the decomposition condition, at which point the quadtree decomposition ends; wherein the image decomposition condition is set as follows:
calculating, for each decomposed image block of the image sequence in the quadtree, the focus-factor maximum difference MDFM and the gradient difference SMDG; the calculation formulas are respectively:
MDFM = FM_max − FM_min;
SMDG = Σ_x Σ_y (grad_max(x, y) − grad_min(x, y));
wherein FM_max represents the maximum of the focus measures and FM_min the minimum; grad_max(x, y) represents the per-pixel maximum gradient value and grad_min(x, y) the per-pixel minimum gradient value;
for an image block of a layer in the quadtree, if MDFM ≥ 0.98 × SMDG, a fully focused image block exists in the image sequence at that layer, and the block is not decomposed further; otherwise, decomposition of the layer's image blocks continues until all images in the quadtree are decomposed into sub-blocks that cannot be decomposed further.
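The decomposition condition can be sketched as a recursive routine over the focus stack; the `min_size` floor and the returned index of the best-focused image are assumptions added for illustration, not part of the claim.

```python
import numpy as np

def grad_mag(img):
    """Gradient magnitude of a grayscale image."""
    gy, gx = np.gradient(img.astype(np.float64))
    return np.hypot(gy, gx)

def split_blocks(stack, y0, y1, x0, x1, min_size=8, out=None):
    """Recursively split a region of the focus stack until some image
    is deemed fully focused on it (MDFM >= 0.98 * SMDG) or the block
    reaches the minimum size.  Returns (y0, y1, x0, x1, best_index).
    """
    if out is None:
        out = []
    grads = [grad_mag(img[y0:y1, x0:x1]) for img in stack]
    fms = [g.sum() for g in grads]               # FM_i per image
    mdfm = max(fms) - min(fms)                   # MDFM
    gmax = np.max(grads, axis=0)                 # grad_max(x, y)
    gmin = np.min(grads, axis=0)                 # grad_min(x, y)
    smdg = (gmax - gmin).sum()                   # SMDG
    if mdfm >= 0.98 * smdg or (y1 - y0) <= min_size:
        out.append((y0, y1, x0, x1, int(np.argmax(fms))))
        return out
    ym, xm = (y0 + y1) // 2, (x0 + x1) // 2      # quad split
    for ys, ye, xs, xe in [(y0, ym, x0, xm), (y0, ym, xm, x1),
                           (ym, y1, x0, xm), (ym, y1, xm, x1)]:
        split_blocks(stack, ys, ye, xs, xe, min_size, out)
    return out
```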
5. The method of claim 4, wherein the maximum value FM_max and the minimum value FM_min of the focus measure are obtained as follows:
firstly, calculating the gradient matrix of each image in the quadtree root layer, with the formula:
GM_i = gradient(I_i), i = 1, 2, …, n;
wherein I_i is the i-th original high dynamic multi-focus image and GM_i is the gradient matrix corresponding to I_i; n is the total number of images in the original high dynamic multi-focus sequence;
secondly, finding, at each point, the maximum and minimum over all gradient matrices of the layer's images:
GM_max = max(GM_i(x, y)), i = 1, 2, …, n;
GM_min = min(GM_i(x, y)), i = 1, 2, …, n;
thirdly, calculating, for each image of the layer, the sum of its gradient matrix over all points:
FM_i = Σ_x Σ_y GM_i(x, y), i = 1, 2, …, n;
finally, finding the maximum and minimum of these sums respectively:
FM_max = max{FM_i}, i = 1, 2, …, n;
FM_min = min{FM_i}, i = 1, 2, …, n.
6. The high dynamic range image imaging method according to claim 5, wherein the process of fusing the sharp parts of each image in step 5 comprises: taking all the obtained sharp parts as sharp sub-image blocks, recording the height information of each, and fusing all the sharp sub-image blocks into a complete three-dimensional image of the observation sample.
7. The high dynamic range image imaging method according to claim 1, wherein step 5 further comprises: filtering the generated three-dimensional shape with a median filter to eliminate the jagged artifacts caused by insufficient sampling frequency, so that the generated three-dimensional shape is smoother.
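The smoothing of claim 7 amounts to a median filter over the reconstructed height map; a minimal sketch, with an assumed 3×3 window and reflection padding at the borders:

```python
import numpy as np

def median_smooth(depth, k=3):
    """k x k median filter over a depth map, suppressing isolated
    jagged spikes while preserving step edges."""
    p = k // 2
    padded = np.pad(depth, p, mode="reflect")
    # gather the k*k shifted views of the map and take their median
    windows = [padded[dy:dy + depth.shape[0], dx:dx + depth.shape[1]]
               for dy in range(k) for dx in range(k)]
    return np.median(np.stack(windows), axis=0)
```

Unlike mean filtering, the median leaves genuine height steps intact while removing single-pixel outliers, which is why it suits sawtooth suppression.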
CN201710057799.1A 2017-01-23 2017-01-23 High dynamic range image imaging method based on 3D digital microscopic imaging system Expired - Fee Related CN106846383B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710057799.1A CN106846383B (en) 2017-01-23 2017-01-23 High dynamic range image imaging method based on 3D digital microscopic imaging system

Publications (2)

Publication Number Publication Date
CN106846383A CN106846383A (en) 2017-06-13
CN106846383B true CN106846383B (en) 2020-04-17






Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20200417