CN111986078A - Multi-scale core CT image fusion reconstruction method based on guide data


Info

Publication number
CN111986078A
CN111986078A (application CN201910425975.1A)
Authority
CN
China
Prior art keywords
image
low
resolution
mode
quality
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910425975.1A
Other languages
Chinese (zh)
Other versions
CN111986078B (en)
Inventor
何小海 (He Xiaohai)
李征骥 (Li Zhengji)
滕奇志 (Teng Qizhi)
卿粼波 (Qing Linbo)
任超 (Ren Chao)
王正勇 (Wang Zhengyong)
吴晓红 (Wu Xiaohong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan University
Original Assignee
Sichuan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan University
Priority to CN201910425975.1A
Publication of CN111986078A
Application granted
Publication of CN111986078B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053 Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10081 Computed x-ray tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention discloses a multi-scale core CT image fusion reconstruction method based on guide data. The method comprises the following steps: (1) degrading the high-quality image with several degradation methods and selecting the most appropriate degradation mode according to a blind-reference image quality evaluation method; (2) constructing a multi-level mapping dictionary from the high-quality image and the degraded image; (3) using the low-resolution image as guide data, traversing it to extract its patterns and searching for the most similar pattern in the multi-level mapping dictionary; (4) fusing each retrieved pattern with the pattern extracted from the low-resolution image. Given several core CT images at different scales, the method reconstructs a high-quality, large-field-of-view three-dimensional core CT image and has good application value.

Description

Multi-scale core CT image fusion reconstruction method based on guide data
Technical Field
The invention relates to a fusion reconstruction method, in particular to multi-scale core CT image fusion reconstruction based on guide data, and belongs to the technical field of core image fusion reconstruction.
Background
CT is the mainstream imaging equipment for obtaining three-dimensional core images, but because of the limitations of CT hardware, imaging quality depends on sample size. When the scanned sample is large, the acquired image has good global representativeness and reflects the sample well at the macroscopic scale; however, each image point then corresponds to a larger physical region, so the resolution is lower, the image is more seriously affected by noise, and image quality is reduced. When the sample is small, image quality improves: small pores that cannot be resolved in a large sample appear clearly, and the image better reflects the sample at the microscopic scale, but the macroscopic representativeness of the sample is insufficient.
With a fusion reconstruction method, core images at different scales can be fused to obtain a large-field-of-view, high-quality core sample image. Fusion reconstruction compensates to some extent for the deficiencies of CT imaging and has therefore attracted increasing attention from researchers.
Disclosure of Invention
The invention aims to reconstruct a large-field-of-view, high-quality core image from core CT images at different scales and to provide more accurate data for subsequent lithology analysis.
This aim is realized by a multi-scale core CT image fusion reconstruction method whose technical scheme mainly comprises the following steps:
(1) degrading a high-quality, high-resolution image (high-resolution image for short) with several degradation methods, and selecting the degraded image most similar to the low-quality, low-resolution image (low-resolution image for short) according to a blind-reference image quality evaluation method;
(2) constructing a multi-level mapping dictionary from the high-resolution image and the degraded image;
(3) using the low-resolution image as guide data, traversing it to extract its patterns, and searching for the most similar pattern in the multi-level mapping dictionary;
(4) fusing each retrieved pattern with the pattern extracted from the low-resolution image;
the basic principle of the method is as follows:
for the same core sample, the high-resolution image is a clear image of some local part of the low-resolution image, and the low-resolution image can be regarded as a degraded, noisy version of the corresponding high-resolution image.
From this model one can envisage the following: if the mapping relation between high-resolution and low-resolution images can be found, a reconstruction method can be designed that uses this relation, takes the low-resolution core image as guide data, and fuses the patterns appearing in the low-resolution and high-resolution core images, realizing high- and low-resolution fusion reconstruction. Predicting this mapping usually requires accurate registration of the high- and low-resolution images, but during CT imaging the exact position of the high-resolution image within the low-resolution image is difficult to determine, so registration is difficult.
This differs from core image super-resolution reconstruction, where the target to be reconstructed is mainly high-frequency information. In super-resolution, the reconstruction step superposes the low-frequency information of the low-resolution image with predicted high-frequency information to obtain the reconstructed high-resolution image. The reconstructed image retains the same low-frequency information as the low-resolution image; that is, the reconstructed core image is morphologically identical to the low-resolution image in terms of grains and pores.
In core image fusion reconstruction, besides the high-frequency information lost in the low-resolution image, it is more important to restore, from the high-resolution image, the pore information that cannot be distinguished in the low-resolution image. During reconstruction it is therefore necessary to use the low-frequency information of the low-resolution image as guide data and apply a certain correction that restores the small-pore features of the high-resolution image into the low-resolution image, that is, a correction to the morphology of the grains and pores of the low-resolution image, rather than merely to add high-frequency information.
As mentioned above, fusion reconstruction requires estimating the degradation relationship, and the key to estimating it correctly is correctly judging the quality similarity between a degraded image and the low-resolution image.
In super-resolution reconstruction algorithms, the degradation model is typically estimated from images of the same field of view at different resolutions. In CT imaging, however, the exact positions of the high- and low-resolution images are difficult to determine, so registering them is difficult. Core image super-resolution algorithms therefore also simulate the low-resolution image by degrading the high-resolution image.
To estimate the degradation relation, the method degrades the high-resolution image with several degradation modes to obtain several degraded images, and then uses the blind-reference image quality evaluation method of Zhang et al. (Zhang L, Zhang L, Bovik A C. A Feature-Enriched Completely Blind Image Quality Evaluator [J]. IEEE Transactions on Image Processing, 2015, 24(8): 2579-2591.) to evaluate the degraded images and the low-resolution image in terms of average brightness, average contrast, average gradient, and frequency information, so as to select the most suitable degradation mode.
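For illustration, the degradation-mode selection this paragraph describes might be sketched as follows. This is a minimal stand-in, not the cited IL-NIQE evaluator: it compares only the four named statistics (average brightness, average contrast, average gradient, frequency energy), and the normalization and distance measure are assumptions.

```python
import numpy as np

def quality_features(img):
    """Average brightness, average contrast, average gradient magnitude,
    and high-frequency energy of a 2D grayscale image."""
    img = img.astype(np.float64)
    gy, gx = np.gradient(img)
    spec = np.abs(np.fft.fft2(img)) ** 2
    spec[0, 0] = 0.0                      # drop the DC term
    return np.array([
        img.mean(),                       # average brightness
        img.std(),                        # average contrast
        np.hypot(gx, gy).mean(),          # average gradient
        spec.sum() / img.size,            # frequency information
    ])

def most_similar_degradation(low_res, degraded_list):
    """Index of the degraded image whose feature vector is closest
    (relative Euclidean distance) to that of the low-resolution image."""
    ref = quality_features(low_res)
    dists = [np.linalg.norm((quality_features(d) - ref) / (np.abs(ref) + 1e-9))
             for d in degraded_list]
    return int(np.argmin(dists))
```

A heavily blurred candidate, for example, scores far from a noisy low-resolution image on the gradient and frequency features, so a blur-plus-noise candidate would be preferred over a clean one.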
The invention combines the super-resolution idea with the theory of multiple-point statistics reconstruction to propose a multi-scale core CT image fusion reconstruction algorithm. The fusion algorithm first predicts the degradation relationship between the high-resolution and low-resolution images. It then establishes a mapping between high-resolution patterns and degraded (simulated low-resolution) patterns according to this relationship; this low-resolution-to-high-resolution pattern correspondence replaces the low-frequency-to-high-frequency correspondence used in super-resolution reconstruction. During reconstruction, matching follows the pattern-matching approach of multiple-point statistics reconstruction algorithms.
Specifically, in step (1), the high-resolution core image is degraded with several different degradation methods, the quality of each degraded image and of the low-resolution image is computed with a blind-reference image evaluation method, and the degraded image most similar to the low-resolution image is selected;
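As a sketch of this step: the patent does not enumerate its degradation methods, so the three candidates below (box blurs of two sizes, and blur plus additive noise) are purely illustrative assumptions.

```python
import numpy as np

def box_blur(img, k=3):
    """Separable k-by-k box blur with edge padding."""
    pad = k // 2
    p = np.pad(img.astype(np.float64), pad, mode="edge")
    kern = np.ones(k) / k
    # horizontal then vertical running mean
    h = np.apply_along_axis(lambda r: np.convolve(r, kern, mode="valid"), 1, p)
    return np.apply_along_axis(lambda c: np.convolve(c, kern, mode="valid"), 0, h)

def degrade_candidates(hr, seed=0):
    """Several degraded versions of the high-resolution image, one of
    which is later selected by the blind quality comparison."""
    rng = np.random.default_rng(seed)
    return [
        box_blur(hr, 3),
        box_blur(hr, 5),
        box_blur(hr, 3) + rng.normal(0.0, 5.0, hr.shape),  # blur + noise
    ]
```

Each candidate keeps the size of the high-resolution image; a downsampling step to the low-resolution grid could be appended to any of them.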
In step (2), patterns are extracted from the high-resolution core image and the degraded core image using multi-level grid templates, local binary patterns are computed for the patterns obtained from the degraded image, and all corresponding local binary patterns, degraded-image patterns, and high-resolution-image patterns are stacked to construct the multi-level mapping dictionary;
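A single-level sketch of this construction (one grid size rather than the multi-level templates, with an 8-neighbour local binary pattern at the patch centre; the patch size, step, and the 1/r scale alignment between degraded and high-resolution images are assumptions):

```python
import numpy as np

def lbp_code(patch):
    """8-neighbour local binary pattern at the patch centre: bit b is set
    when the b-th neighbour (clockwise from top-left) is >= the centre."""
    i, j = patch.shape[0] // 2, patch.shape[1] // 2
    c = patch[i, j]
    ring = [patch[i-1, j-1], patch[i-1, j], patch[i-1, j+1], patch[i, j+1],
            patch[i+1, j+1], patch[i+1, j], patch[i+1, j-1], patch[i, j-1]]
    return sum(1 << b for b, v in enumerate(ring) if v >= c)

def build_dictionary(hr, degraded, p=5, r=2, step=2):
    """Stack (LBP code, degraded pattern, high-resolution pattern) triples.
    Assumes the degraded image is aligned with the high-resolution image
    at 1/r scale, so a p-by-p degraded patch maps to a (p*r)-by-(p*r)
    high-resolution patch."""
    entries = []
    rows, cols = degraded.shape
    for i in range(0, rows - p + 1, step):
        for j in range(0, cols - p + 1, step):
            d_patch = degraded[i:i + p, j:j + p]
            h_patch = hr[i * r:(i + p) * r, j * r:(j + p) * r]
            entries.append((lbp_code(d_patch), d_patch, h_patch))
    return entries
```

During the coarse search of step (3), rotating a pattern by 45° steps corresponds to circularly shifting the 8 bits of its LBP code.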
In step (3), pattern information is extracted from the low-resolution image and the local binary pattern of each extracted pattern is computed. A coarse search is first performed in the dictionary by local binary pattern to find several most similar degraded patterns; during the coarse search, rotational matching of patterns is realized by circularly shifting the values of the local binary pattern. An accurate search then selects the single most similar pattern block from the coarse-search results. The computation is as follows: let L be a data pattern to be reconstructed, extracted from the low-resolution image, of size (p × q); searching the pattern dictionary yields the most similar low-resolution pattern L′ and its corresponding high-resolution pattern H′, where L and L′ have the same size (p × q). Since the image used to generate the low-resolution patterns in the dictionary is the high-resolution image reduced by a factor of r, the corresponding high-resolution pattern H′ has size (u × v), where u = p × r and v = q × r. To fuse the pattern L extracted from the low-resolution image with the retrieved high-resolution pattern H′, the method up-samples L by a factor of r with a super-resolution reconstruction algorithm to obtain an enlarged pattern L″; L″ and H′ then have the same size (u × v). During search matching, the similarity of patterns L and L′ is computed by formula 1:
(Formula 1, shown only as an image in the original publication: the similarity of L and L′, expressed in terms of SSIM, the image bit depth D, and a weight α.)
where SSIM is the structural similarity measure (Wang Z, Bovik A C, Sheikh H R, Simoncelli E P. Image quality assessment: from error visibility to structural similarity [J]. IEEE Transactions on Image Processing, 2004, 13(4): 600-612.), D is the image bit depth, and α is a weight. For a three-dimensional image, similarly:
(Formula 2, shown only as an image in the original publication: the three-dimensional analogue of formula 1, with SSIM replaced by SSIM3D.)
where SSIM3D is a three-dimensional structural similarity measure (Zeng K, Wang Z. 3D-SSIM for video quality assessment [C]. 2012 19th IEEE International Conference on Image Processing, 2012: 621-624.), and p, q, and m are the length, width, and height of the three-dimensional pattern L, respectively.
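Because formulas 1 and 2 appear only as images in the source, the exact expression is not reproduced here. The sketch below is an assumed similarity measure built from the quantities the text names: SSIM, the image bit depth D, and a weight α. A single-window SSIM stands in for the windowed SSIM of Wang et al., and the grey-level term is an assumption.

```python
import numpy as np

def ssim(x, y, D=8, c1=0.01, c2=0.03):
    """Global (single-window) SSIM, a simplified stand-in for the
    windowed SSIM of Wang et al. 2004. D is the image bit depth."""
    L = 2 ** D - 1
    C1, C2 = (c1 * L) ** 2, (c2 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + C1) * (2 * cov + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))

def pattern_similarity(L_pat, Lp_pat, D=8, alpha=0.5):
    """Assumed blend of structural similarity and normalized grey-level
    closeness; the patented formula 1 is given only as an image, so this
    exact combination is a guess at its general shape."""
    grey = 1.0 - np.abs(L_pat.astype(float) - Lp_pat.astype(float)).mean() / (2 ** D - 1)
    return alpha * ssim(L_pat, Lp_pat, D) + (1 - alpha) * grey
```

Identical patterns score 1.0, and the score drops as structure or grey level diverges, which is the behaviour the accurate-search step needs.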
In step (4), the final value of each pixel of the region to be reconstructed is determined jointly by the pixel value at the corresponding position of the original low-resolution pattern enlarged by the super-resolution method and the pixel value at the corresponding position of the most similar pattern in the dictionary. To suppress blocking artifacts as much as possible during reconstruction, while preserving the approximate positions and sizes of the pores and grains of the low-resolution image and exploiting the texture and pattern information provided by the high-resolution pattern, an adaptive weighted reconstruction method is adopted. After the most similar pattern is found, it is fused with the enlarged low-resolution pattern, and the final reconstruction result P is obtained from the following formula:
(Formula 3, shown only as an image in the original publication: the adaptive weighted fusion of the enlarged low-resolution pattern and the matched high-resolution pattern that yields P.)
The fused core image is obtained after the whole low-resolution image has been traversed.
The invention has the beneficial effects that:
The method adopts multi-scale core CT image fusion reconstruction based on guide data: using the low-resolution core image as guide data, it fuses the patterns appearing in the low-resolution and high-resolution core images.
Drawings
FIG. 1 is a schematic block diagram of a multi-scale core CT image fusion reconstruction method based on guide data;
FIG. 2 shows a high-quality high-resolution image and a low-quality low-resolution image of the same core at different scales;
FIG. 3 is a schematic representation of the reconstruction results;
FIG. 4 shows an original low-resolution image and a binarized image of the reconstruction result;
FIG. 5 is a two-point correlation function, a linear path function, a pore size distribution, and a local porosity contrast of an original low resolution image and a reconstructed result;
FIG. 6 is a three-dimensional low resolution image, a three-dimensional high resolution image, a reconstruction result, and a reconstruction result slice;
FIG. 7 is a two-point correlation function, linear path function, pore size distribution, and local porosity contrast for an original low resolution image and a reconstructed result.
Detailed Description
The invention will be further illustrated with reference to the following specific examples and the accompanying drawings:
Embodiment:
To make the method of the invention easier to understand and closer to practical application, the whole process, including the core method of the invention, is described below in detail:
(1) Given a large-field-of-view, low-resolution three-dimensional CT core image and a small-field-of-view, high-resolution three-dimensional CT core image obtained by scanning a small sample drilled from the same core, the two images are fused to generate a large-field-of-view, high-resolution core image. Fig. 6 shows the large-field-of-view low-resolution image, the small-field-of-view high-resolution image, and the image after fusion reconstruction.
(2) Compute the image quality of the large-field-of-view, low-resolution three-dimensional CT core image; degrade the small-field-of-view, high-resolution image with several degradation methods; compute the image quality of each degraded image; and select the degraded image whose quality is most similar to that of the large-field-of-view, low-resolution image to construct the multi-level mapping dictionary.
(3) Traverse the large-field-of-view, low-resolution three-dimensional CT core image, extract its patterns, and search for the most similar pattern in the multi-level mapping dictionary.
(4) Fuse each retrieved pattern with the pattern extracted from the large-field-of-view, low-resolution image to complete the fusion reconstruction.
After reconstruction, the result is verified with a two-point correlation function, a linear path function, local porosity, and the pore size distribution.
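Of these validation statistics, the two-point correlation function is the simplest to state: S2(r) is the probability that two points a distance r apart both fall in the pore phase. A sketch along one axis of a binary image (the axis choice and sampling are implementation assumptions, as the patent does not give its computation):

```python
import numpy as np

def two_point_correlation(binary, max_lag):
    """Two-point probability S2(r) along the x-axis of a 2D binary image
    (1 = pore): the fraction of pixel pairs at horizontal lag r that are
    both pore."""
    img = binary.astype(bool)
    s2 = []
    for r in range(max_lag + 1):
        a = img[:, : img.shape[1] - r]   # left member of each pair
        b = img[:, r:]                   # right member, shifted by r
        s2.append(np.mean(a & b))
    return np.array(s2)
```

S2(0) equals the porosity, and for an uncorrelated medium S2(r) decays toward porosity squared at large lags, which is why matching S2 curves indicate preserved pore position and connectivity statistics.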
As the slice comparison in Fig. 6 shows, the reconstruction result has a structural morphology similar to the large-field-of-view, low-resolution three-dimensional CT core image, indicating that reconstruction does not change the basic morphology of the core image and that the basic morphological characteristics of the pores and grains of the original image are well preserved.
As the comparison of panels (a) to (g) in Fig. 7 shows, the reconstruction result and the original low-resolution sample have similar distributions of the two-point correlation function, linear path function, and local porosity, indicating that the reconstruction preserves the position, size, and connectivity characteristics of the pores and grains provided by the low-resolution image. Panel (h) counts the proportion of pores of different sizes in each of the three images. In the original low-resolution three-dimensional image, pores of volume 0-1000 μm³ account for about 1% of the total core sample volume, and for volumes of 2000-7000 μm³ the pore fraction rises slowly to about 2%. In the original high-resolution three-dimensional image, pores of 0-1000 μm³ account for about 6% of the total volume, and for 2000-7000 μm³ the pore fraction drops slowly to about 2%, consistent with the low-resolution image. This shows that small pore structures cannot be captured in the low-resolution image because of its resolution and image quality, while the high-resolution image better represents the structure and distribution of small pores. Between the two original images, pores of 17000-27000 μm³ occupy a higher fraction in the low-resolution image than in the high-resolution image, showing that the larger sample allows the low-resolution image to capture the structure of many large pores more comprehensively, giving it stronger global representativeness.
In the reconstruction result, pores of 0-1000 μm³ account for about 5% of the whole core sample volume, close to their proportion in the high-resolution original image, and for 2000-7000 μm³ the pore fraction falls slowly to about 2%, higher than in the low-resolution original and close to the high-resolution original, showing that the fusion reconstruction algorithm reproduces well the structure and distribution of the pores of the high-resolution image. In the reconstructed three-dimensional structure, the fraction of pores of 17000-27000 μm³ is essentially consistent with that part of the low-resolution image, showing that the reconstruction reproduces structural and distribution characteristics consistent with the large pores of the low-resolution image.
The above embodiments are merely preferred embodiments of the present invention and do not limit its technical solutions; any technical solution that can be implemented on the basis of the above embodiments without creative effort shall be considered to fall within the protection scope of the present invention.

Claims (5)

1. A method for multi-scale core CT image fusion reconstruction based on guide data is characterized by comprising the following steps:
(1) degrading a high-quality, high-resolution image (high-resolution image for short) with several degradation methods, and selecting the degraded image most similar to the low-quality, low-resolution image (low-resolution image for short) according to a blind-reference image quality evaluation method;
(2) constructing a multi-level mapping dictionary from the high-resolution image and the degraded image;
(3) using the low-resolution image as guide data, traversing it to extract its patterns, and searching for the most similar pattern in the multi-level mapping dictionary;
(4) fusing each retrieved pattern with the pattern extracted from the low-resolution image.
2. The method of claim 1, wherein: in the step (1), the high-resolution core image is degraded by using a plurality of different degradation methods, the quality of the degraded image and the quality of the low-resolution image are respectively calculated by using a blind reference image evaluation method, and the degraded image most similar to the low-resolution image is selected.
3. The method of claim 1, wherein: in the step (2), the high-resolution core image and the degraded core image are subjected to pattern extraction by respectively adopting templates of multi-level grids, local binary patterns of the patterns obtained from the degraded images are extracted, and all corresponding local binary patterns, degraded image patterns and high-resolution image patterns are stacked to construct a multi-level mapping dictionary.
4. The method of claim 1, wherein: in step (3), pattern information is extracted from the low-resolution image and the local binary pattern of each extracted pattern is computed; a coarse search is performed in the dictionary by local binary pattern to find several most similar degraded patterns, during which rotational matching of patterns is realized by circularly shifting the values of the local binary pattern; an accurate search then selects the most similar pattern block from the coarse-search results.
5. The method of claim 1, wherein: in step (4), the retrieved high-resolution pattern and the enlarged low-resolution pattern of the most similar match are fused and reconstructed by weighted reconstruction.
CN201910425975.1A 2019-05-21 2019-05-21 Multi-scale core CT image fusion reconstruction method based on guide data Active CN111986078B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910425975.1A CN111986078B (en) 2019-05-21 2019-05-21 Multi-scale core CT image fusion reconstruction method based on guide data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910425975.1A CN111986078B (en) 2019-05-21 2019-05-21 Multi-scale core CT image fusion reconstruction method based on guide data

Publications (2)

Publication Number Publication Date
CN111986078A (en) 2020-11-24
CN111986078B CN111986078B (en) 2023-02-10

Family

ID=73436270

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910425975.1A Active CN111986078B (en) 2019-05-21 2019-05-21 Multi-scale core CT image fusion reconstruction method based on guide data

Country Status (1)

Country Link
CN (1) CN111986078B (en)


Citations (9)

Publication number Priority date Publication date Assignee Title
CN103679650A (en) * 2013-11-26 2014-03-26 四川大学 Core three-dimension image repairing method
CN105488776A (en) * 2014-10-10 2016-04-13 北京大学 Super-resolution image reconstruction method and apparatus
CN105654070A (en) * 2016-02-04 2016-06-08 山东理工大学 Low-resolution face recognition method
CN106203256A (en) * 2016-06-24 2016-12-07 山东大学 A kind of low resolution face identification method based on sparse holding canonical correlation analysis
CN106709875A (en) * 2016-12-30 2017-05-24 北京工业大学 Compressed low-resolution image restoration method based on combined deep network
CN107169925A (en) * 2017-04-21 2017-09-15 西安电子科技大学 The method for reconstructing of stepless zooming super-resolution image
CN107341776A (en) * 2017-06-21 2017-11-10 北京工业大学 Single frames super resolution ratio reconstruction method based on sparse coding and combinatorial mapping
CN107818554A (en) * 2016-09-12 2018-03-20 索尼公司 Message processing device and information processing method
CN108898560A (en) * 2018-06-21 2018-11-27 四川大学 Rock core CT image super-resolution rebuilding method based on Three dimensional convolution neural network


Non-Patent Citations (3)

Title
SOTCHEADT SIM: "Electromechanical probe and automated indentation maps are sensitive techniques in assessing early degenerated human articular cartilage", https://onlinelibrary.wiley.com/doi/full/10.1002/jor.23330 *
ZHANG Jing (张敬): "Research on Super-Resolution Reconstruction of Multi-Source Images" (多源图像超分辨率重建研究), China Doctoral Dissertations Full-text Database, Information Science and Technology *
YANG Dan (杨丹): "Analysis of Training Images for Three-Dimensional Reconstruction of Rock Thin Sections" (岩石薄片三维重建训练图像分析), Journal of Sichuan University (Natural Science Edition) *

Cited By (2)

Publication number Priority date Publication date Assignee Title
CN112784433A (en) * 2021-01-31 2021-05-11 郑州轻工业大学 Hierarchical simulated annealing modeling method based on corrosion
CN112784433B (en) * 2021-01-31 2023-04-11 郑州轻工业大学 Hierarchical simulated annealing modeling method based on corrosion

Also Published As

Publication number Publication date
CN111986078B (en) 2023-02-10

Similar Documents

Publication Publication Date Title
Darwish et al. Image segmentation for the purpose of object-based classification
Zhou et al. Building detection in Digital surface model
CN110119728A (en) Remote sensing images cloud detection method of optic based on Multiscale Fusion semantic segmentation network
CN104881677B (en) Method is determined for the optimum segmentation yardstick of remote sensing image ground mulching classification
CN106228528B (en) A kind of multi-focus image fusing method based on decision diagram and rarefaction representation
CN109285222A (en) The building of organic shale high-resolution digital rock core and analysis method
CN108648256B (en) Grayscale rock core three-dimensional reconstruction method based on super-dimension
CN109242985B (en) Method for determining key parameters of pore structure from three-dimensional image
CN103106658A (en) Island or reef coastline rapid obtaining method
CN106295498A (en) Remote sensing image target area detection apparatus and method
Chen et al. An improved multi-resolution hierarchical classification method based on robust segmentation for filtering ALS point clouds
CN109977968B (en) SAR change detection method based on deep learning classification comparison
CN110598613B (en) Expressway agglomerate fog monitoring method
CN107622239B (en) Detection method for remote sensing image specified building area constrained by hierarchical local structure
Yang et al. Evaluating SAR sea ice image segmentation using edge-preserving region-based MRFs
CN108171119B (en) SAR image change detection method based on residual error network
CN112037221B (en) Multi-domain co-adaptation training method for cervical cancer TCT slice positive cell detection model
Chen et al. Single depth image super-resolution using convolutional neural networks
CN106529472B (en) Object detection method and device based on large scale high-resolution high spectrum image
Chen et al. A mathematical morphology-based multi-level filter of LiDAR data for generating DTMs
CN109886170A (en) A kind of identification of oncomelania intelligent measurement and statistical system
CN110047079A (en) A kind of optimum segmentation scale selection method based on objects similarity
Deshmukh et al. Segmentation of microscopic images: A survey
CN111986078B (en) Multi-scale core CT image fusion reconstruction method based on guide data
CN108230365A (en) SAR image change detection based on multi-source differential image content mergence

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant