CN109949383B - High dynamic optical projection tomography method and device - Google Patents

High dynamic optical projection tomography method and device

Info

Publication number
CN109949383B
CN109949383B (application CN201910105623.8A)
Authority
CN
China
Prior art keywords
image
image sequence
pixel
gray value
sequence
Prior art date
Legal status
Active
Application number
CN201910105623.8A
Other languages
Chinese (zh)
Other versions
CN109949383A (en)
Inventor
韩定安
李秉尧
林秋萍
张艳婷
王雪花
王茗祎
曾亚光
谭海曙
Current Assignee
Foshan University
Original Assignee
Foshan University
Priority date
Filing date
Publication date
Application filed by Foshan University filed Critical Foshan University
Priority to CN201910105623.8A priority Critical patent/CN109949383B/en
Publication of CN109949383A publication Critical patent/CN109949383A/en
Application granted granted Critical
Publication of CN109949383B publication Critical patent/CN109949383B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses a high dynamic optical projection tomography method and device: images taken at different exposure times are processed and fused in the frequency domain with background images of the corresponding exposure times to obtain a complete, true image with high-contrast detail, and filtered back projection then yields the three-dimensional reconstruction.

Description

High dynamic optical projection tomography method and device
Technical Field
The invention relates to the technical fields of image processing, three-dimensional imaging, and high dynamic imaging, and in particular to a high dynamic optical projection tomography method and device.
Background
Optical projection tomography (OPT) is a novel three-dimensional (3-D) imaging technique for small biological specimens. It either renders the sample transparent with optical clearing agents or uses naturally transparent samples, and computes the three-dimensional structure from photons transmitted through the sample in the ultraviolet, visible, and near-infrared spectra. To overcome the limited dynamic range of the camera response curve and the large differences in transparency between regions of the sample, several high dynamic imaging methods exist, such as: acquisition with HDR sensor (High Dynamic Range sensor) hardware; high dynamic fusion by solving a low-rank matrix from linear image sequences; and region exposure-time correction with a normalization algorithm. HDR sensor hardware is difficult to adopt because of its excessive process cost, so fusing a dynamic image sequence captured at several different exposure times is common at present. The linear-image-sequence low-rank-matrix method extracts a low-rank matrix and sparse noise from a batch of linearly correlated images and then recovers a fused image by low-rank matrix recovery. The normalization-based region exposure-time correction method uses the measured camera response curve to linearize its nonlinear region and then restores the acquired image using several exposure gradients.
Of the above methods, the low-rank-matrix high dynamic fusion method captures several groups of static images, vectorizes each image as a column of a data matrix, and solves the matrix with the inexact augmented Lagrange multiplier method to obtain a high dynamic image of higher precision. However, when little data is available, it is difficult to obtain a matrix of sufficiently low rank whose variance on the remaining singular vectors is small. The normalization-based method corrects the exposure time of different regions so that every pixel of the output image has a consistent exposure time while compensating the camera's nonlinear response, reducing image distortion and producing results better matched to human vision; however, regions of low and high transparency differ too little in the fused image (for example, the fish body appears almost as transparent as the bone region of the sample), which does not match reality, so the sample obtained when the OPT (Optical Projection Tomography) back projection algorithm is applied has low fidelity.
In summary, existing high dynamic imaging methods for optical projection tomography three-dimensional imaging, such as the normalization-based region exposure-time correction method, introduce large errors into the recovered structural information of the sample.
Disclosure of Invention
To solve the problems of existing methods and obtain a complete three-dimensional structure with true, complete, high-contrast detail, the present disclosure provides a frequency-domain-based high dynamic three-dimensional imaging method that recovers true, complete, high-contrast three-dimensional structural information of a sample: images taken at different exposure times are processed and fused in the frequency domain with background images of the corresponding exposure times to obtain a complete, true image with high-contrast detail, and filtered back projection then yields the three-dimensional reconstruction.
To achieve the above object, according to an aspect of the present disclosure, there is provided a high dynamic optical projection tomography method comprising the steps of:
step 1, inputting a first image sequence to obtain a second image sequence according to the arrangement of exposure time from small to large;
Since an image taken at a higher exposure has a higher overall gray value than the image one exposure level below it, the gray values of the images in the first image sequence are summed pixel by pixel:

g_i = Σ_{j=1}^{M} f_ij, i = 1, …, N,

where N is the total number of images in the first image sequence and M is the total number of pixels in each image; g_i is the sum of the pixel gray values of the i-th image, and f_ij is the gray value of the j-th pixel of the i-th image.

The pixel gray value f_ij is obtained from the pixel's RGB components by the conversion f_ij = 0.299R + 0.587G + 0.114B.

Sort the g_i values and arrange the images accordingly; since g_i grows with exposure time, ordering the images by g_i from small to large yields the second image sequence, arranged by exposure time from small to large.
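Step 1 can be sketched as follows in a minimal Python/NumPy snippet (the function and variable names are illustrative, not from the patent): convert each RGB capture to gray with f = 0.299R + 0.587G + 0.114B, sum the gray values per image to get g_i, and sort the images by that sum, which grows monotonically with exposure time.

```python
import numpy as np

def to_gray(rgb):
    """Luma conversion f_ij = 0.299 R + 0.587 G + 0.114 B."""
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

def order_by_exposure(images):
    """Sort an RGB image list so exposure (proxied by total gray value g_i) ascends."""
    sums = [to_gray(im).sum() for im in images]  # g_i for each image
    order = np.argsort(sums)                     # from small to large
    return [images[i] for i in order]

# toy example: three constant images of increasing brightness, shuffled
dark = np.full((4, 4, 3), 10.0)
mid = np.full((4, 4, 3), 100.0)
light = np.full((4, 4, 3), 200.0)
second_sequence = order_by_exposure([light, dark, mid])
```

After sorting, `second_sequence` runs from the darkest (shortest exposure) to the brightest (longest exposure) image.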
Step 2, iteratively segmenting the second image sequence to obtain a foreground image sequence and a background image sequence of the second image sequence;
step 2.1, obtaining the maximum gray value G of each image in the second image sequence MAX And a minimum gray value G MIN ,TR k For the segmentation threshold, k= … M-1, M is the total number of pixels of the image, let the initial threshold
Figure BDA0001966672040000022
Step 2.2, the segmentation threshold TR is greater than or equal to k Is segmented into foreground images, will be less than the segmentation threshold TR k Is divided into background images, and the average gray value G of the foreground images is obtained F And average gray value G of background image B
Step 2.3, obtaining a new threshold
Figure BDA0001966672040000023
Step 2.4, if T k =T k+1 Turning to step 2.5, otherwise turning to step 2.2;
and 2.5, outputting a foreground image sequence and a background image sequence.
The foreground image sequence and the background image sequence obtained by iterative segmentation have good image quality; based on the iterated threshold, the main foreground and background regions of the image can be distinguished.
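The iterative segmentation of steps 2.1 through 2.4 can be sketched as a short NumPy routine (an illustrative sketch; names and the convergence tolerance are assumptions, not from the patent): start from the midpoint of the gray-value range and repeatedly replace the threshold with the mean of the foreground and background averages until it stops changing.

```python
import numpy as np

def iterative_threshold(gray, tol=0.5):
    """Iterative mean-based threshold (steps 2.1-2.4): start from
    (G_MAX + G_MIN)/2 and refine with the foreground/background
    average gray values until the threshold stops changing."""
    t = (gray.max() + gray.min()) / 2.0        # initial threshold TR_0
    while True:
        fg = gray[gray >= t]                   # foreground pixels (>= TR_k)
        bg = gray[gray < t]                    # background pixels (< TR_k)
        if fg.size == 0 or bg.size == 0:
            return t                           # degenerate image: keep threshold
        t_new = (fg.mean() + bg.mean()) / 2.0  # TR_{k+1} = (G_F + G_B) / 2
        if abs(t_new - t) < tol:               # converged: TR_k == TR_{k+1}
            return t_new
        t = t_new

# synthetic bimodal image: dark background (~20), bright square (~220)
img = np.full((8, 8), 20.0)
img[2:6, 2:6] = 220.0
t = iterative_threshold(img)
foreground = img >= t
```

On the toy bimodal image the threshold settles between the two modes, so `foreground` picks out exactly the bright square.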
Preferably, in step 2, the step of iteratively segmenting the second image sequence to obtain its foreground and background image sequences may be replaced by: removing the sample and directly capturing background images without the sample using the CMOS camera, then converting them to grayscale. That is, the background image sequence is obtained by capturing background images with the CMOS camera and converting them to grayscale, where the gray value f_ij of a pixel in the background image is obtained from the pixel's RGB components by f_ij = 0.299R + 0.587G + 0.114B.
Step 3, calculating the gray value variance of the background image sequence;
In OPT theory, omnidirectional images of the sample over 360° under the same exposure gradient are required; applying filtered back projection to these images yields slice images of the sample. Over the 360° acquisition, the sample image moves within a certain pixel range, i.e. the gray values of those pixels change; the pixel regions whose gray values remain unchanged can be used to compute the background difference. These regions differ between exposure gradients at the same angle and constitute the background image sequence, whose gray-value variance is then computed. Here, an exposure gradient is an image sequence acquired at different exposure times, with a constant exposure-time interval between the images of the sequence.

Let Z denote the set of gray values of all M pixel points in the background image, with z = (x, y) and z ∈ Z, and let m denote the mean pixel gray value; then the mean is

m = (1/M) Σ_{z∈Z} z

and the gray-value variance of the i-th image of the background image sequence is

σ_i² = (1/M) Σ_{z∈Z} (z − m)²,

i = 1, …, N, where N is the total number of images in the background image sequence.
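The mean and variance of step 3 can be computed directly per background image; the following is a minimal sketch (illustrative names, not the patent's code), using the population variance m = (1/M)Σz, σ² = (1/M)Σ(z − m)² defined above.

```python
import numpy as np

def background_variance(background):
    """Gray-value variance of one background image:
    m = (1/M) * sum(z), sigma^2 = (1/M) * sum((z - m)^2)."""
    z = background.astype(float).ravel()  # all M pixel gray values
    m = z.mean()                          # mean gray value m
    return float(((z - m) ** 2).mean())   # population variance

# a perfectly flat background has zero variance; a varying one does not
flat = np.full((16, 16), 128.0)
noisy = flat + np.arange(256).reshape(16, 16) * 0.1
```

This is the quantity compared against the pixel threshold in steps 4 and 5 to choose between difference and superposition processing.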
Step 4: when the gray-value variance is greater than the pixel threshold, perform frequency-domain difference processing on the second image sequence and the corresponding background image sequence to obtain a third image sequence, with the expression H_i(x, y) = h_i(x, y) + [h_i(x, y) − m_i(x, y)], where H_i(x, y) is the resulting third image; h_i(x, y) is the frequency-domain image obtained by applying the two-dimensional Fourier transform to the grayscale image g_i(x, y) of the second image; g_i(x, y) is the grayscale image assembled from the values f_ij; f_ij is the gray value of the j-th pixel of the i-th image, obtained from the pixel's RGB components by f_ij = 0.299R + 0.587G + 0.114B; i = 1, …, N and j = 1, …, M, with N the total number of images in the first image sequence and M the total number of pixels per image; and m_i(x, y) is the background image after Fourier transform processing.

Step 5: when the gray-value variance is less than or equal to the pixel threshold, perform frequency-domain superposition processing on the second image sequence and the corresponding background image sequence to obtain a third image sequence, with the expression H_i(x, y) = h_i(x, y) + [h_i(x, y) + m_i(x, y)], where the symbols are defined as in step 4.
Step 6: perform the inverse Fourier transform on the third image sequence to obtain the high dynamic imaging image.
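Steps 4 through 6 for a single image can be sketched as follows (an illustrative NumPy sketch under the patent's stated rule; function names are assumptions): Fourier-transform the grayscale image (h_i) and its background (m_i), then either take the difference H_i = h_i + (h_i − m_i) when the background variance exceeds the pixel threshold, or the superposition H_i = h_i + (h_i + m_i) otherwise, and invert the transform.

```python
import numpy as np

PIXEL_THRESHOLD = 200  # default pixel threshold from the patent (range 150-255)

def fuse_with_background(gray, background, pixel_threshold=PIXEL_THRESHOLD):
    """Steps 4-6 for one image: 2-D FFT of the image (h_i) and the
    background (m_i), frequency-domain difference or superposition
    chosen by the background gray-value variance, then inverse FFT."""
    h = np.fft.fft2(gray)            # h_i(x, y)
    m = np.fft.fft2(background)      # m_i(x, y)
    if background.var() > pixel_threshold:
        H = h + (h - m)              # H_i = h_i + [h_i - m_i] (difference)
    else:
        H = h + (h + m)              # H_i = h_i + [h_i + m_i] (superposition)
    return np.real(np.fft.ifft2(H))  # step 6: back to the spatial domain

gray = np.random.default_rng(0).uniform(0, 255, (32, 32))
background = np.full((32, 32), 40.0)  # flat background: variance 0, so superposition
fused = fuse_with_background(gray, background)
```

Because the FFT is linear, the superposition branch here is equivalent to 2·gray + background in the spatial domain, which makes the sketch easy to sanity-check.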
Further, in step 1, the first image sequence is obtained by photographing the sample from directly in front with a CMOS camera at different exposure times of 0.38×2n ms (milliseconds), n = 1, 2, …, 255, giving a plurality of acquired images of the sample; the CMOS camera is any one of the UCMOS10000KPA, UCMOS08000KPA, or U3CMOS series C-interface USB3.0 CMOS cameras. The acquired images are ordered into a single sequence according to the exposure time at which the CMOS camera captured them, arranged by exposure time from small to large, i.e. the first image sequence. The sample may be a whole small animal, a part of an animal, or plant tissue, and directly in front of the sample means the direction in which the CMOS camera faces the sample.
Further, in steps 4 and 5, the pixel threshold is an integer in the range 150 to 255, with a default value of 200.
The invention also provides a high dynamic optical projection tomography apparatus, the apparatus comprising: a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor executing the computer program to operate in the following units:
the image sequencing unit is used for inputting the first image sequence to obtain a second image sequence according to the arrangement of the exposure time from small to large;
the background segmentation unit is used for iteratively segmenting the second image sequence to obtain a foreground image sequence and a background image sequence of the second image sequence;
a gray variance calculating unit for calculating gray value variance of the background image sequence;
the frequency domain difference processing unit is used for carrying out frequency domain difference processing on the second image sequence and the corresponding background image sequence to obtain a third image sequence when the gray value variance is larger than the pixel threshold value;
the frequency domain superposition processing unit is used for carrying out frequency domain superposition processing on the second image sequence and the corresponding background image sequence to obtain a third image sequence when the gray value variance is smaller than or equal to the pixel threshold value;
and the high dynamic imaging unit is used for carrying out inverse Fourier transform on the third image sequence to obtain a high dynamic imaging image.
The beneficial effects of the present disclosure are: by superposing threshold-based difference values on the two-dimensional Fourier plane, the resulting three-dimensional imaging carries richer information than conventional OPT (Optical Projection Tomography) results, completely preserves the distribution of the surface layer and internal structure of the sample, recovers high-contrast detail information that cannot be obtained under a single exposure, yields images more faithful than the normalization-based region exposure-time correction method, and achieves fine three-dimensional structural imaging of samples with complex spatial structures.
Drawings
The above and other features of the present disclosure will become more apparent from the detailed description of embodiments given in conjunction with the accompanying drawings, in which like reference numerals designate like or similar elements. As will be apparent to those of ordinary skill in the art, the drawings are merely some examples of the present disclosure, from which other drawings may be derived without inventive effort.
FIG. 1 is a flow chart of a high dynamic optical projection tomography method;
FIG. 2 is a comparison of acquired-image fusion according to an embodiment of the present disclosure: FIG. 2(a) is the result of frequency-domain high dynamic processing over exposure times of 0.06 ms to 1.5 ms; FIG. 2(b) is an acquired image with an exposure time of 0.38 ms; FIG. 2(c) plots the pixel-value distribution along line A in FIG. 2(a) and FIG. 2(b); FIG. 2(d) plots the pixel values along line B in FIG. 2(a) and FIG. 2(b), where line A is row 398 of the acquired image and line B is row 600 of the acquired image;
FIG. 3 is a three-dimensional slice view of an embodiment of the present disclosure: FIG. 3(a) is the 450th slice along the Z axis in the X-Y plane, FIG. 3(b) is the 250th slice along the X axis in the Y-Z plane, FIG. 3(c) is the 203rd slice along the Y axis in the X-Z plane, and FIG. 3(d) is a reconstructed image of the sample;
fig. 4 shows a high dynamic optical projection tomography apparatus.
Detailed Description
The conception, specific structure, and technical effects produced by the present disclosure will be clearly and completely described below in connection with the embodiments and the drawings to fully understand the objects, aspects, and effects of the present disclosure. It should be noted that, in the case of no conflict, the embodiments and features in the embodiments may be combined with each other.
A flow chart of a high dynamic optical projection tomography method according to the present disclosure is shown in fig. 1, and a high dynamic optical projection tomography method according to an embodiment of the present disclosure is explained below in connection with fig. 1.
The disclosure provides a high dynamic optical projection tomography method, which specifically comprises the following steps:
step 1, inputting a first image sequence to obtain a second image sequence according to the arrangement of exposure time from small to large;
Since an image taken at a higher exposure has a higher overall gray value than the image one exposure level below it, the gray values of the images in the first image sequence are summed pixel by pixel:

g_i = Σ_{j=1}^{M} f_ij, i = 1, …, N,

where N is the total number of images in the first image sequence and M is the total number of pixels in each image; g_i is the sum of the pixel gray values of the i-th image, and f_ij is the gray value of the j-th pixel of the i-th image.

The pixel gray value f_ij is obtained from the pixel's RGB components by the conversion f_ij = 0.299R + 0.587G + 0.114B.

Sort the g_i values and arrange the images accordingly; since g_i grows with exposure time, ordering the images by g_i from small to large yields the second image sequence, arranged by exposure time from small to large.
Step 2, iteratively segmenting the second image sequence to obtain a foreground image sequence and a background image sequence of the second image sequence;
step 2.1, obtaining the maximum gray value G of each image in the second image sequence MAX And a minimum gray value G MIN ,TR k For the segmentation threshold, k= … M-1, M is the total number of pixels of the image, let the initial threshold
Figure BDA0001966672040000061
Step 2.2, the segmentation threshold TR is greater than or equal to k Is segmented into foreground images, will be less than the segmentation threshold TR k Is divided into background images, and the average gray value G of the foreground images is obtained F And average gray value G of background image B
Step 2.3, obtaining a new threshold
Figure BDA0001966672040000062
Step 2.4, if T k =T k+1 Turning to step 2.5, otherwise turning to step 2.2;
step 2.5, outputting a foreground image sequence and a background image sequence;
the foreground image sequence and the background image sequence obtained by iterative segmentation have good image effects, and the main areas of the foreground and the background of the image can be distinguished based on the iterative threshold value.
Preferably, in step 2, the step of iteratively segmenting the second image sequence to obtain its foreground and background image sequences may be replaced by: removing the sample and directly capturing background images without the sample using the CMOS camera, then converting them to grayscale. That is, the background image sequence is obtained by capturing background images with the CMOS camera and converting them to grayscale, where the gray value f_ij of a pixel in the background image is obtained from the pixel's RGB components by f_ij = 0.299R + 0.587G + 0.114B.
Step 3, calculating the gray value variance of the background image sequence;
In OPT theory, omnidirectional images of the sample over 360° under the same exposure gradient are required; applying filtered back projection to these images yields slice images of the sample. Over the 360° acquisition, the sample image moves within a certain pixel range, i.e. the gray values of those pixels change; the pixel regions whose gray values remain unchanged can be used to compute the background difference. These regions differ between exposure gradients at the same angle and constitute the background image sequence, whose gray-value variance is then computed. Here, an exposure gradient is an image sequence acquired at different exposure times, with a constant exposure-time interval between the images of the sequence.

Let Z denote the set of gray values of all M pixel points in the background image, with z = (x, y) and z ∈ Z, and let m denote the mean pixel gray value; then the mean is

m = (1/M) Σ_{z∈Z} z

and the gray-value variance of the i-th image of the background image sequence is

σ_i² = (1/M) Σ_{z∈Z} (z − m)²,

i = 1, …, N, where N is the total number of images in the background image sequence.
Step 4: when the gray-value variance is greater than the pixel threshold, perform frequency-domain difference processing on the second image sequence and the corresponding background image sequence to obtain a third image sequence, with the expression H_i(x, y) = h_i(x, y) + [h_i(x, y) − m_i(x, y)], where H_i(x, y) is the resulting third image; h_i(x, y) is the frequency-domain image obtained by applying the two-dimensional Fourier transform to the grayscale image g_i(x, y) of the second image; g_i(x, y) is the grayscale image assembled from the values f_ij; f_ij is the gray value of the j-th pixel of the i-th image, obtained from the pixel's RGB components by f_ij = 0.299R + 0.587G + 0.114B; i = 1, …, N and j = 1, …, M, with N the total number of images in the first image sequence and M the total number of pixels per image; and m_i(x, y) is the background image after Fourier transform processing.

Step 5: when the gray-value variance is less than or equal to the pixel threshold, perform frequency-domain superposition processing on the second image sequence and the corresponding background image sequence to obtain a third image sequence, with the expression H_i(x, y) = h_i(x, y) + [h_i(x, y) + m_i(x, y)], where the symbols are defined as in step 4.
Step 6: perform the inverse Fourier transform on the third image sequence to obtain the high dynamic imaging image.
Further, in step 1, the first image sequence is obtained by photographing the sample from directly in front with a CMOS camera at different exposure times of 0.38×2n ms (milliseconds), n = 1, 2, …, 255, giving a plurality of acquired images of the sample; the CMOS camera is any one of the UCMOS10000KPA, UCMOS08000KPA, or U3CMOS series C-interface USB3.0 CMOS cameras. The acquired images are ordered into a single sequence according to the exposure time at which the CMOS camera captured them, arranged by exposure time from small to large, i.e. the first image sequence. The sample may be a whole small animal, a part of an animal, or plant tissue, and directly in front of the sample means the direction in which the CMOS camera faces the sample.
Further, in steps 4 and 5, the pixel threshold is an integer in the range 150 to 255, with a default value of 200.
One embodiment of the present disclosure is as follows. Sample images are collected: the sample is a fish, and the CMOS camera photographs it from directly in front at different exposure times to obtain a set of acquired images. The acquired images are arranged by exposure time from small to large, and after the two-dimensional Fourier transform the background is removed on the two-dimensional Fourier plane, giving a set of background-free two-dimensional Fourier-plane frequency-domain maps. The difference between the frequency-domain map of the second exposure gradient and that of the first exposure gradient is added to obtain a two-dimensional Fourier-plane frequency-domain processing map; the background-removed frequency-domain map of the next higher exposure gradient is then added to the difference of all previously processed maps with the first-level exposure gradient to obtain a second-level frequency-domain processing map. This operation is repeated until the highest exposure gradient is reached, after which a spatial-domain high dynamic image at that angle is obtained by the two-dimensional inverse Fourier transform. Repeating this over 360°, for a total of 200 views, yields a full-angle high dynamic composite acquisition of the sample; back projection transformation of the composite images then reconstructs the three-dimensional image of the sample.
The exposure gradient is an image sequence acquired at different exposure times with a constant exposure-time interval between the images; directly in front of the sample is the direction in which the CMOS camera faces the sample.
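The embodiment's final step is the standard OPT filtered back projection over the 200 views spanning 360°. The following is a minimal, illustrative NumPy sketch of that step (nearest-neighbor back projection with a simple ramp filter; names and details are assumptions, not the patent's implementation), shown on a toy sinogram of a centered point; production code would typically use a library routine such as skimage.transform.iradon.

```python
import numpy as np

def filtered_back_projection(sinogram, angles_deg):
    """Minimal FBP: ramp-filter each projection in the detector-frequency
    domain, then smear it back across the image grid at its view angle."""
    n_angles, n_det = sinogram.shape
    # ramp filter |f| applied along the detector axis
    freqs = np.fft.fftfreq(n_det)
    spectrum = np.fft.fft(sinogram, axis=1) * np.abs(freqs)
    filtered = np.real(np.fft.ifft(spectrum, axis=1))
    # back-project onto an n_det x n_det grid centered at the rotation axis
    mid = n_det // 2
    xs = np.arange(n_det) - mid
    X, Y = np.meshgrid(xs, xs)
    recon = np.zeros((n_det, n_det))
    for proj, theta in zip(filtered, np.deg2rad(angles_deg)):
        # nearest detector bin hit by each pixel for this view
        t = np.round(X * np.cos(theta) + Y * np.sin(theta)).astype(int) + mid
        valid = (t >= 0) & (t < n_det)
        recon[valid] += proj[t[valid]]
    return recon * np.pi / (2 * len(angles_deg))

# toy sinogram of a centered point: a spike at the detector center at every angle
angles = np.linspace(0.0, 360.0, 200, endpoint=False)
sino = np.zeros((200, 65))
sino[:, 32] = 1.0
recon = filtered_back_projection(sino, angles)
```

Reconstructing the point sinogram concentrates the intensity at the center of the grid, which is the expected behavior of FBP on a centered point source.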
As shown in FIG. 2, FIG. 2(a) is the result of frequency-domain high dynamic processing over exposure times of 0.06 ms to 1.5 ms, and FIG. 2(b) is an acquired image with an exposure time of 0.38 ms. FIG. 2(c) plots the pixel-value distribution along line A in FIGS. 2(a) and 2(b), and FIG. 2(d) plots the pixel values along line B, where line A is row 398 of the acquired image and line B is row 600.
As can be seen from FIG. 2, an exposure time of 0.38 ms captures most of the structural information of the sample, but for the fish fins, which have higher light transmittance, information is lost because the exposure time is too high, while for the fish belly, which has lower light transmittance, comprehensive high-contrast information is difficult to obtain because the exposure time is too low. FIG. 2(c) shows that in the fin region the pixel values of part of the pixel segment of the 0.38 ms acquired image reach 255 (the maximum pixel value of the experimental data), so the corresponding information changes cannot be captured and information is lost. FIGS. 2(c) and 2(d) show that in regions of normal or lower transmittance, such as the belly and head of the fish, the frequency-domain high dynamic processed image has a larger span between peaks and troughs, i.e. more contrast detail information. By comparison, the method overcomes the limited dynamic range of the camera's photosensitive element and obtains complete, high-contrast optical information.
As shown in FIG. 3, FIG. 3(a) shows the 450th slice along the Z axis in the X-Y plane, FIG. 3(b) the 250th slice along the X axis in the Y-Z plane, and FIG. 3(c) the 203rd slice along the Y axis in the X-Z plane; FIG. 3(d) is a reconstructed image of the sample. As shown in FIG. 3(c), the method obtains clearer organ slice information; the spine structure of the sample and the distribution of tiny bones around the spine can be observed in the figure.
A high dynamic optical projection tomography apparatus provided by an embodiment of the present disclosure is shown in FIG. 4. The apparatus of this embodiment includes: a processor, a memory, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the steps of the high dynamic optical projection tomography method embodiment described above are carried out.
The device comprises: a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor executing the computer program to operate in the following units:
the image sequencing unit is used for inputting the first image sequence to obtain a second image sequence according to the arrangement of the exposure time from small to large;
the background segmentation unit is used for iteratively segmenting the second image sequence to obtain a foreground image sequence and a background image sequence of the second image sequence;
a gray variance calculating unit for calculating gray value variance of the background image sequence;
the frequency domain difference processing unit is used for carrying out frequency domain difference processing on the second image sequence and the corresponding background image sequence to obtain a third image sequence when the gray value variance is larger than the pixel threshold value;
the frequency domain superposition processing unit is used for carrying out frequency domain superposition processing on the second image sequence and the corresponding background image sequence to obtain a third image sequence when the gray value variance is smaller than or equal to the pixel threshold value;
and the high dynamic imaging unit is used for carrying out inverse Fourier transform on the third image sequence to obtain a high dynamic imaging image.
The high-dynamic optical projection tomography device can be operated in computing equipment such as a desktop computer, a notebook computer, a palm computer, a cloud server and the like. The high dynamic optical projection tomography apparatus may include, but is not limited to, a processor, a memory. It will be appreciated by those skilled in the art that the example is merely an example of a high dynamic optical projection tomography apparatus, and is not limiting of a high dynamic optical projection tomography apparatus, and may include more or fewer components than the example, or may combine certain components, or different components, e.g., the high dynamic optical projection tomography apparatus may further include input and output devices, network access devices, buses, etc.
The processor may be a central processing unit (CPU), another general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general purpose processor may be a microprocessor, or the processor may be any conventional processor. The processor is the control center of the high dynamic optical projection tomography device and connects the parts of the entire device using various interfaces and lines.
The memory may be used to store the computer program and/or modules, and the processor implements the various functions of the high dynamic optical projection tomography device by running or executing the computer program and/or modules stored in the memory and invoking data stored in the memory. The memory may mainly include a program storage area and a data storage area: the program storage area may store an operating system and the application programs required for at least one function (such as a sound playing function or an image playing function); the data storage area may store data created during use of the device (such as audio data or a phone book). In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash card, at least one magnetic disk storage device, a flash memory device, or another solid-state storage device.
While the present disclosure has been described in considerable detail with respect to several embodiments, it is not intended to be limited to any such detail or embodiment, but is to be construed, by reference to the appended claims in view of the prior art, broadly enough to effectively encompass the intended scope of the disclosure. Furthermore, the foregoing describes the disclosure in terms of embodiments foreseen by the inventor in order to provide an enabling description; insubstantial changes to the disclosure not presently foreseen may nonetheless represent equivalents thereof.

Claims (8)

1. A method of high dynamic optical projection tomography, the method comprising the steps of:
step 1, inputting a first image sequence and arranging it by exposure time from small to large to obtain a second image sequence;
step 2, iteratively segmenting the second image sequence to obtain a foreground image sequence and a background image sequence of the second image sequence;
step 3, calculating the gray value variance of the background image sequence;
step 4, when the gray value variance is greater than the pixel threshold, performing frequency domain difference processing on the second image sequence and the corresponding background image sequence to obtain a third image sequence;
step 5, when the gray value variance is less than or equal to the pixel threshold, performing frequency domain superposition processing on the second image sequence and the corresponding background image sequence to obtain a third image sequence;
step 6, performing an inverse Fourier transform on the third image sequence to obtain a high dynamic image;
in step 4, when the gray value variance is greater than the pixel threshold, frequency domain difference processing is performed on the second image sequence and the corresponding background image sequence to obtain the third image sequence according to the expression H_i(x, y) = h_i(x, y) + [h_i(x, y) - m_i(x, y)], where H_i(x, y) is the resulting third image; h_i(x, y) is the frequency domain image obtained by applying a two-dimensional Fourier transform to the gray-scale image g_i(x, y) of the second image; g_i(x, y) is composed of the pixel gray values f_ij, f_ij being the gray value of the j-th pixel of the i-th image; the pixel gray value f_ij is obtained from the pixel's RGB color values by f_ij = 0.299R + 0.587G + 0.114B; i = 1, ..., N and j = 1, ..., M, where N is the total number of images in the first image sequence and M is the total number of pixels of an image; and m_i(x, y) is the background image after Fourier transform processing;
in step 5, when the gray value variance is less than or equal to the pixel threshold, frequency domain superposition processing is performed on the second image sequence and the corresponding background image sequence to obtain the third image sequence according to the expression H_i(x, y) = h_i(x, y) + [h_i(x, y) + m_i(x, y)], where H_i(x, y) is the resulting third image; h_i(x, y) is the frequency domain image obtained by applying a two-dimensional Fourier transform to the gray-scale image g_i(x, y) of the second image; g_i(x, y) is composed of the pixel gray values f_ij, f_ij being the gray value of the j-th pixel of the i-th image; the pixel gray value f_ij is obtained from the pixel's RGB color values by f_ij = 0.299R + 0.587G + 0.114B; i = 1, ..., N and j = 1, ..., M, where N is the total number of images in the first image sequence and M is the total number of pixels of an image; and m_i(x, y) is the background image after Fourier transform processing.
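Taken at face value, the two branch formulas combine the image spectrum h_i with the background spectrum m_i, and by linearity of the Fourier transform the inverse transform of H = h + (h - m) is simply 2g - bg (and 2g + bg for the superposition branch). A small numerical check of this assumed reading:

```python
import numpy as np

# Hypothetical example data: a gray image g_i(x, y) and its background image.
g = np.random.rand(16, 16)
bg = np.random.rand(16, 16) * 0.1

h = np.fft.fft2(g)                    # h_i(x, y): 2-D Fourier transform of g
m = np.fft.fft2(bg)                   # m_i(x, y): spectrum of the background

H_diff = h + (h - m)                  # used when background variance > threshold
H_sum = h + (h + m)                   # used when background variance <= threshold

# Step 6: the inverse Fourier transform yields the high dynamic image.
out = np.real(np.fft.ifft2(H_diff))
```

Because the FFT is linear, `out` equals `2 * g - bg` up to floating-point rounding, which makes the difference branch an amplified background subtraction.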
2. The method according to claim 1, wherein in step 1, the first image sequence is obtained by placing a CMOS camera directly in front of the sample, capturing a plurality of sample images at different exposure times, and ordering the acquired images into one image sequence according to the exposure time of the CMOS camera.
3. The method of claim 1, wherein in step 1, the second image sequence, arranged by exposure time from small to large, is obtained from the first image sequence as follows: the gray values of each image are summed over the pixel positions,
g_i = Σ_{j=1}^{M} f_ij, i = 1, ..., N,
where N is the total number of images in the first image sequence and M is the total number of pixels of an image; g_i is the sum of the pixel gray values of the i-th image, and f_ij is the gray value of the j-th pixel of the i-th image; the pixel gray value f_ij is obtained from the pixel's RGB color values by f_ij = 0.299R + 0.587G + 0.114B; the g_i are sorted from large to small, and arranging the images according to g_i yields the second image sequence arranged by exposure time from small to large.
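A short sketch of this step under the reading above (the function name `order_by_gray_sum` is hypothetical): convert each RGB image to gray with the 0.299/0.587/0.114 weights, sum the gray values per image, and order the images by that sum.

```python
import numpy as np

def order_by_gray_sum(rgb_images):
    """rgb_images: list of (H, W, 3) arrays; returns (sorted images, sorted sums)."""
    weights = np.array([0.299, 0.587, 0.114])   # f_ij = 0.299R + 0.587G + 0.114B
    # g_i: sum of the gray values of all M pixels of the i-th image
    g = [float((img @ weights).sum()) for img in rgb_images]
    # Sort g_i from large to small, as stated in the claim.
    order = np.argsort(g)[::-1]
    return [rgb_images[k] for k in order], [g[k] for k in order]
```

The per-image sum acts as a brightness proxy, so ordering by g_i orders the images by effective exposure.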
4. The method of claim 1, wherein in step 2, the method of iteratively segmenting the second image sequence into a foreground image sequence and a background image sequence of the second image sequence comprises the steps of:
step 2.1, obtain the maximum gray value G_MAX and the minimum gray value G_MIN of each image in the second image sequence; TR_k denotes the segmentation threshold, k = 0, ..., M-1, where M is the total number of pixels of the image; set the initial threshold
TR_0 = (G_MAX + G_MIN) / 2;
step 2.2, segment the pixels whose gray value is greater than or equal to the segmentation threshold TR_k into the foreground image and the pixels whose gray value is less than TR_k into the background image, and obtain the average gray value G_F of the foreground image and the average gray value G_B of the background image;
step 2.3, obtain the new threshold
TR_{k+1} = (G_F + G_B) / 2;
step 2.4, if TR_k = TR_{k+1}, go to step 2.5; otherwise, go to step 2.2;
step 2.5, output the foreground image sequence and the background image sequence.
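Steps 2.1 through 2.5 describe an isodata-style iterative threshold; a minimal sketch (the function name is hypothetical, and floating-point equality is relaxed to `np.isclose` as a practical stopping test):

```python
import numpy as np

def iterative_segment(gray):
    """gray: 2-D gray-scale array; returns (foreground mask, final threshold)."""
    g = np.asarray(gray, dtype=float)
    tr = (g.max() + g.min()) / 2.0              # TR_0 = (G_MAX + G_MIN) / 2
    while True:
        fg = g >= tr                            # foreground: gray >= TR_k
        bg = ~fg                                # background: gray < TR_k
        g_f = g[fg].mean() if fg.any() else 0.0  # average foreground gray G_F
        g_b = g[bg].mean() if bg.any() else 0.0  # average background gray G_B
        tr_next = (g_f + g_b) / 2.0             # TR_{k+1} = (G_F + G_B) / 2
        if np.isclose(tr_next, tr):             # stop when TR_k == TR_{k+1}
            return fg, tr
        tr = tr_next
```

On a strongly bimodal image the loop typically converges in a handful of iterations, since each update moves the threshold toward the midpoint between the two class means.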
5. The method of claim 1, wherein in step 2, iteratively segmenting the second image sequence to obtain the foreground image sequence and the background image sequence may alternatively be replaced as follows: the sample is withdrawn and background images without the sample are captured directly by the CMOS camera and converted to gray scale; that is, the background image sequence is acquired by photographing the background with the CMOS camera and converting the captured images into gray-scale background images.
6. The method of claim 1, wherein in step 3, the gray value variance of the background image sequence is calculated as follows: let Z denote the set of gray values of all M pixel points in a background image, z = f(x, y), z ∈ Z, and let m denote the average pixel gray value; then
m = (1/M) Σ_{z∈Z} z,
and the gray value variance of each background image is
σ² = (1/M) Σ_{z∈Z} (z − m)²,
computed for each of the N images, where N is the total number of images in the background image sequence.
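Assuming the mean and variance are computed per background image, as the preceding formulas suggest, the calculation is a one-liner over the M pixel gray values (the function name is hypothetical):

```python
import numpy as np

def background_gray_variance(background):
    """background: 2-D gray-scale array; returns (mean m, variance sigma^2)."""
    z = np.asarray(background, dtype=float).ravel()   # set Z of the M gray values
    m = z.mean()                                      # m = (1/M) * sum of z over Z
    var = ((z - m) ** 2).mean()                       # sigma^2 = (1/M) * sum (z - m)^2
    return m, var
```

The resulting variance is what steps 4 and 5 compare against the pixel threshold to choose between the difference and superposition branches.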
7. The method of claim 1, wherein in steps 4 and 5, the pixel threshold is an integer in the range of 150 to 255.
8. A high dynamic optical projection tomography apparatus, the apparatus comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor, when executing the computer program, implementing the following units:
the image sorting unit is used for arranging the input first image sequence by exposure time from small to large to obtain a second image sequence;
the background segmentation unit is used for iteratively segmenting the second image sequence to obtain a foreground image sequence and a background image sequence of the second image sequence;
the gray variance calculating unit is used for calculating the gray value variance of the background image sequence;
the frequency domain difference processing unit is used for performing frequency domain difference processing on the second image sequence and the corresponding background image sequence to obtain a third image sequence when the gray value variance is greater than the pixel threshold;
the frequency domain superposition processing unit is used for performing frequency domain superposition processing on the second image sequence and the corresponding background image sequence to obtain a third image sequence when the gray value variance is less than or equal to the pixel threshold;
the high dynamic imaging unit is used for performing an inverse Fourier transform on the third image sequence to obtain a high dynamic image;
when the gray value variance is greater than the pixel threshold, frequency domain difference processing is performed on the second image sequence and the corresponding background image sequence to obtain the third image sequence according to the expression H_i(x, y) = h_i(x, y) + [h_i(x, y) - m_i(x, y)], where H_i(x, y) is the resulting third image; h_i(x, y) is the frequency domain image obtained by applying a two-dimensional Fourier transform to the gray-scale image g_i(x, y) of the second image; g_i(x, y) is composed of the pixel gray values f_ij, f_ij being the gray value of the j-th pixel of the i-th image; the pixel gray value f_ij is obtained from the pixel's RGB color values by f_ij = 0.299R + 0.587G + 0.114B; i = 1, ..., N and j = 1, ..., M, where N is the total number of images in the first image sequence and M is the total number of pixels of an image; and m_i(x, y) is the background image after Fourier transform processing;
when the gray value variance is less than or equal to the pixel threshold, frequency domain superposition processing is performed on the second image sequence and the corresponding background image sequence to obtain the third image sequence according to the expression H_i(x, y) = h_i(x, y) + [h_i(x, y) + m_i(x, y)], where H_i(x, y) is the resulting third image; h_i(x, y) is the frequency domain image obtained by applying a two-dimensional Fourier transform to the gray-scale image g_i(x, y) of the second image; g_i(x, y) is composed of the pixel gray values f_ij, f_ij being the gray value of the j-th pixel of the i-th image; the pixel gray value f_ij is obtained from the pixel's RGB color values by f_ij = 0.299R + 0.587G + 0.114B; i = 1, ..., N and j = 1, ..., M, where N is the total number of images in the first image sequence and M is the total number of pixels of an image; and m_i(x, y) is the background image after Fourier transform processing.
CN201910105623.8A 2019-02-01 2019-02-01 High dynamic optical projection tomography method and device Active CN109949383B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910105623.8A CN109949383B (en) 2019-02-01 2019-02-01 High dynamic optical projection tomography method and device


Publications (2)

Publication Number Publication Date
CN109949383A CN109949383A (en) 2019-06-28
CN109949383B true CN109949383B (en) 2023-07-11

Family

ID=67007615


Country Status (1)

Country Link
CN (1) CN109949383B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113295650B (en) * 2021-05-28 2022-05-20 北京理工大学 Hydrogen three-dimensional concentration testing device and testing method thereof

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109124615A (en) * 2018-09-06 2019-01-04 佛山科学技术学院 One kind can constituency high dynamic laser speckle blood current imaging device and method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108805139B (en) * 2018-05-07 2022-02-18 南京理工大学 Image similarity calculation method based on frequency domain visual saliency analysis




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: 528000 Foshan Institute of science and technology, Xianxi reservoir West Road, Shishan town, Nanhai District, Foshan City, Guangdong Province

Patentee after: Foshan University

Country or region after: China

Address before: 528000 Foshan Institute of science and technology, Xianxi reservoir West Road, Shishan town, Nanhai District, Foshan City, Guangdong Province

Patentee before: FOSHAN University

Country or region before: China