CN112991370B - Rock core CT crack identification and segmentation method - Google Patents


Info

Publication number
CN112991370B
CN112991370B (application CN202110378251.3A)
Authority
CN
China
Prior art keywords
image
workpiece
matrix
crack
subgraph
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110378251.3A
Other languages
Chinese (zh)
Other versions
CN112991370A (en)
Inventor
邹永宁
张智斌
余浩松
李琦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University
Original Assignee
Chongqing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University filed Critical Chongqing University
Priority to CN202110378251.3A priority Critical patent/CN112991370B/en
Publication of CN112991370A publication Critical patent/CN112991370A/en
Application granted granted Critical
Publication of CN112991370B publication Critical patent/CN112991370B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G06T7/13 Edge detection
    • G06F18/217 Validation; performance evaluation; active pattern learning techniques
    • G06F18/2411 Classification based on the proximity to a decision surface, e.g. support vector machines
    • G06T7/149 Segmentation involving deformable models, e.g. active contour models
    • G06T7/194 Segmentation involving foreground-background segmentation
    • G06T2207/10081 Computed x-ray tomography [CT]
    • G06T2207/20081 Training; Learning
    • G06T2207/30181 Earth observation
    • G06T2207/30184 Infrastructure


Abstract

The invention relates to a rock core CT crack identification and segmentation method, belonging to the technical field of image processing. The data set is divided into a test set and a training set at a ratio of 1:6, and adaptive median filtering and Hessian-matrix line filtering are applied to all images to enhance image quality. The training-set images are divided into small blocks of equal size; Hu invariant moments, gray-level co-occurrence matrix features and the gray-level mean are extracted from each block, and the resulting feature matrix is used to train an SVM prediction model. The test-set images are divided into blocks of the same size, the same features are extracted, and the trained SVM model predicts each block to coarsely locate the cracks. Finally, the blocks containing cracks are segmented with an active contour method to obtain the final crack segmentation result. The method segments crack defects in core CT images quickly and accurately, and is robust against interference.

Description

Rock core CT crack identification and segmentation method
Technical Field
The invention belongs to the technical field of image processing, and relates to a rock core CT crack identification and segmentation method.
Background
Computed tomography (CT) uses the attenuation of X-rays passing through different materials, together with a reconstruction algorithm, to obtain the internal density distribution of a measured object. It produces clear, high-resolution images, is a recognized advanced means of non-destructive testing, and is widely used in aerospace, medicine, biology, industry, agriculture, electronics, archaeology and other fields.
CT technology is widely used in oil and gas exploration, particularly in core analysis. From a core CT scan image, the storage-space types of the reservoir (cracks, karst caves, dissolution pores and similar features) can be identified, along with the density and aperture of crack development, the distribution of dissolution caves, the porosity of the core and other information. Before any such analysis of the core, the cracks in its CT images must first be segmented.
Crack segmentation has been studied extensively. For example, Oliveira H. et al. performed road-image crack segmentation with a dynamic threshold method and entropy; Landstrom A. et al. extracted longitudinal cracks in road-surface images using morphological methods and a logistic regression model; Liu L. et al. studied CT-image crack segmentation based on the wavelet transform and C-V models; Li Z. et al. proposed a CT-image crack segmentation method based on the finite plane integral transform (FPIT) and planelets. Core CT image crack segmentation has also been studied: Yang Ruina segmented core CT images with an improved level-set algorithm; Wu Xiaoyuan et al. located crack positions with Faster R-CNN and segmented the images with improved threshold segmentation; He Feng et al. used ant colony clustering to segment cracks in core CT images.
Disclosure of Invention
In view of the above, the present invention provides a method for identifying and segmenting core CT cracks. To further improve the segmentation accuracy, robustness and degree of automation of core crack segmentation, the method, based on a support vector machine and active contours, uses the gray-level co-occurrence matrix, the Hu invariant moments and the gray-level mean as features. The gray-level co-occurrence matrix contributes 6 features: contrast, correlation, energy, inverse variance, variance and sum average; the Hu invariant moments contribute 7 features with rotation, scaling and translation invariance. With these features, the method can coarsely locate the crack positions in a CT image without large amounts of training data, shrinking the region to be segmented while keeping accuracy high. A final crack segmentation result is then obtained with an active contour segmentation method.
In order to achieve the purpose, the invention provides the following technical scheme:
a core CT crack identification and segmentation method comprises the following steps:
s1: dividing the sample image into a test sample and a training sample according to the proportion of 1:6, and performing adaptive median filtering and Hessian matrix linear filtering on all pictures to realize noise reduction and enhancement of the sample image;
s2: dividing the training set image in the S1 into 32 x 32 image blocks, classifying the image blocks into a background subgraph, a workpiece edge subgraph, a crack-free workpiece internal subgraph and a crack workpiece internal subgraph, extracting a characteristic matrix for each class of subgraph, and respectively naming the characteristic matrices as data0, data1, data2 and data3;
s3: training three SVM models by using the characteristic matrix obtained in S2: SVM1, SVM2 and SVM3;
the SVM1 distinguishes the workpiece area of the image from the background area; its positive samples are the workpiece-area sub-images (data1 + data2 + data3) and its negative samples are the background sub-images (data0);
the SVM2 distinguishes the workpiece edge area from the workpiece interior area; its positive samples are the interior sub-images (data2 + data3) and its negative samples are the edge sub-images (data1);
the SVM3 distinguishes cracked from crack-free workpiece sub-images; its positive samples are the cracked sub-images (data2) and its negative samples are the crack-free sub-images (data3);
s4: dividing the test image into image blocks with the same size as that in S2 and extracting a feature matrix;
s5: predicting the feature matrix obtained in S4 with the SVM1 model obtained in S3, distinguishing the workpiece area from the background area;
classifying the image blocks of the non-background area into a workpiece edge subgraph and a workpiece internal subgraph by using an SVM2 model;
classifying the sub-images in the workpiece into a crack area and a crack-free area by using an SVM3 model;
s6: reserving the image blocks predicted as the workpiece subgraphs with cracks in the step S5, and setting the pixels of the rest image blocks as 0 to obtain an image P1 with a reduced crack range;
s7: to prevent cracks that fall exactly on sub-image boundaries from being lost, and to preserve as much of each crack as possible, the initial sampling position of the test image in S4 is shifted down, right, and diagonally down-right by half a window width, and S5 and S6 are repeated to obtain images P2, P3 and P4;
s8: adding the images P1, P2, P3 and P4 obtained in the previous step to obtain an image area containing cracks;
s9: and (5) segmenting the cracks in the S8 by using an active contour segmentation method.
Optionally, in S3, the SVM classifiers are built with a feature matrix having 14 elements per row as the training set, and both the training set and the extracted test set must be normalized; the feature matrix has size [N x 14], where N is the total number of sub-images and the 14 elements in each row are the 14 feature parameters of the corresponding sub-image.
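The normalization step above can be sketched as follows. The min-max scheme and the name `normalize_fit` are assumptions for illustration; the text does not specify which normalization is used, only that the same processing must be applied to the training and test feature matrices:

```python
import numpy as np

def normalize_fit(train):
    """Fit column-wise min-max scaling on the [N x 14] training feature
    matrix; return the scaled matrix plus a function that applies the
    same bounds to test features, so both sets share one scale."""
    lo, hi = train.min(axis=0), train.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)  # guard against constant columns
    scale = lambda X: (np.asarray(X, dtype=float) - lo) / span
    return scale(train), scale
```

Fitting the bounds on the training set only, then reusing them on the test set, avoids leaking test statistics into the trained SVM.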
Optionally, in S4, the number of feature parameters extracted from each sub-image is 14: feature 1 to feature 14;
feature 1 is the mean value of the gray levels of the image:
mean gray level: reflects the average brightness of the image; it is the mean of all pixel values in the image:
$\mathrm{mean} = \frac{1}{N}\sum_{x}\sum_{y} f(x,y)$
wherein mean represents the average value of the image pixels, x represents the rows of the original image pixels, y represents the columns of the original image pixels, f represents the original image matrix, and N represents the number of pixel points in the image matrix;
the characteristics 2 to 7 are gray level co-occurrence matrix characteristics of the images:
contrast: reflects the sharpness of the image, the distribution of the matrix values and the local variation of the image; the larger the value, the stronger the texture contrast, the deeper the grooves and the clearer the image;
$CON = \sum_{n} n^{2} \Big[ \sum_{|i-j|=n} p(i,j) \Big]$
where CON is the image contrast, i and j index the rows and columns of the co-occurrence matrix, p is the gray-level co-occurrence matrix obtained after gray-level compression of the original image, $N_g$ is the number of gray levels, d is the absolute value of the difference between i and j, and n runs over the possible values of d;
correlation: reflects local gray-level correlation by measuring the similarity of image gray levels in the row or column direction; the larger the value, the stronger the correlation;
$CORRLN = \dfrac{\sum_{i}\sum_{j} (i \cdot j)\, p(i,j) - \mu_i \mu_j}{\sigma_i \sigma_j}$
where CORRLN is the image correlation, $\mu_i$ and $\sigma_i^2$ are the mean and variance of $p_i$, $\mu_j$ and $\sigma_j^2$ are the mean and variance of $p_j$, and
$p_i = \sum_{j} p(i,j)$ is the sum of row i of the co-occurrence matrix, $p_j = \sum_{i} p(i,j)$ is the sum of column j of the co-occurrence matrix;
energy: reflecting the uniformity degree and the texture thickness of the gray level distribution of the image; if the element values of the gray level co-occurrence matrix are similar, the energy is small, and the texture is fine; if some of the values are large, and others are small, the energy value is large;
$ASM = \sum_{i}\sum_{j} p(i,j)^{2}$
wherein ASM is an angular second moment representing the energy of the image;
inverse variance: measuring local texture change of the image; the larger the value is, the more regular the image texture is;
$IDM = \sum_{i}\sum_{j} \dfrac{p(i,j)}{1+(i-j)^{2}}$
in the formula: IDM denotes the inverse variance;
variance: reflecting the period of the texture, wherein the larger the value is, the larger the period of the texture is;
$VAR = \sum_{i}\sum_{j} (i-m)^{2}\, p(i,j)$
in the formula: m represents the mean of p (i, j);
and (3) mean sum: reflecting the light and shade depth of the image, which is the measurement of the average gray value of the pixel points in the image area;
$SUM = \sum_{k} k \sum_{i+j=k} p(i,j)$
In the formula: k represents the sum of subscripts i and j;
features 8-14 are Hu invariant moment features of the image.
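As a concrete sketch of the 14-parameter feature vector (gray mean, six GLCM features, seven Hu moments), the NumPy-only implementation below may help. The gray-level count, the single horizontal co-occurrence offset, and the function names are assumptions for illustration; the text does not fix these details:

```python
import numpy as np

def glcm(patch, levels=16):
    """Normalized gray-level co-occurrence matrix for a horizontal offset
    of one pixel, after compressing the patch to `levels` gray levels."""
    q = np.floor(patch.astype(float) * (levels - 1)
                 / max(float(patch.max()), 1.0)).astype(int)
    P = np.zeros((levels, levels))
    np.add.at(P, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
    return P / P.sum()

def hu_moments(patch):
    """The 7 Hu invariant moments from normalized central moments."""
    y, x = np.indices(patch.shape).astype(float)
    m00 = patch.sum()
    xb, yb = (x * patch).sum() / m00, (y * patch).sum() / m00
    def nu(p, q):
        mu = (((x - xb) ** p) * ((y - yb) ** q) * patch).sum()
        return mu / m00 ** (1 + (p + q) / 2.0)
    n20, n02, n11 = nu(2, 0), nu(0, 2), nu(1, 1)
    n30, n03, n21, n12 = nu(3, 0), nu(0, 3), nu(2, 1), nu(1, 2)
    h1 = n20 + n02
    h2 = (n20 - n02) ** 2 + 4 * n11 ** 2
    h3 = (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2
    h4 = (n30 + n12) ** 2 + (n21 + n03) ** 2
    h5 = ((n30 - 3 * n12) * (n30 + n12)
          * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
          + (3 * n21 - n03) * (n21 + n03)
          * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))
    h6 = ((n20 - n02) * ((n30 + n12) ** 2 - (n21 + n03) ** 2)
          + 4 * n11 * (n30 + n12) * (n21 + n03))
    h7 = ((3 * n21 - n03) * (n30 + n12)
          * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
          - (n30 - 3 * n12) * (n21 + n03)
          * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))
    return np.array([h1, h2, h3, h4, h5, h6, h7])

def features14(patch):
    """Feature 1: gray mean; features 2-7: GLCM contrast, correlation,
    energy (ASM), inverse difference moment, variance, sum average;
    features 8-14: Hu moments."""
    P = glcm(patch)
    i, j = np.indices(P.shape)
    k = np.arange(P.shape[0])
    pi, pj = P.sum(1), P.sum(0)                  # row and column sums
    mu_i, mu_j = (k * pi).sum(), (k * pj).sum()
    si = np.sqrt((((k - mu_i) ** 2) * pi).sum())
    sj = np.sqrt((((k - mu_j) ** 2) * pj).sum())
    con = ((i - j) ** 2 * P).sum()
    cor = ((i * j * P).sum() - mu_i * mu_j) / (si * sj + 1e-12)
    asm = (P ** 2).sum()
    idm = (P / (1.0 + (i - j) ** 2)).sum()
    var = ((i - mu_i) ** 2 * P).sum()
    sa = ((i + j) * P).sum()                     # sum_k k * p_{x+y}(k)
    return np.concatenate([[patch.mean(), con, cor, asm, idm, var, sa],
                           hu_moments(patch)])
```

Stacking `features14` over all sub-images of a class yields the [N x 14] matrices data0 to data3 described above.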
The beneficial effect of the invention is that using a support vector machine to narrow the crack region reduces the influence of image noise on the algorithm and improves both its speed and its segmentation accuracy.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objectives and other advantages of the invention may be realized and attained by the means of the instrumentalities and combinations particularly pointed out hereinafter.
Drawings
For a better understanding of the objects, aspects and advantages of the present invention, reference will now be made to the following detailed description taken in conjunction with the accompanying drawings in which:
FIG. 1 is a process and flow diagram according to the present invention;
FIG. 2 is a core CT image before preprocessing;
FIG. 3 is the core CT image after preprocessing;
FIG. 4 is a diagram of the effect of the prediction model SVM1 in identifying background and non-background areas;
FIG. 5 is a diagram of the effect of the prediction model SVM2 in identifying the edge and non-edge areas of the core;
FIG. 6 is a graph showing the effect of the predictive model SVM3 in identifying cracked and non-cracked regions within the sample region;
FIG. 7 is a sample mask map obtained from a single sampling;
FIG. 8 shows the result of the superposition of the mask patterns obtained by 4 sampling;
fig. 9 shows the results obtained by crack division.
Detailed Description
The following embodiments of the present invention are provided by way of specific examples, and other advantages and effects of the present invention will be readily apparent to those skilled in the art from the disclosure herein. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention in a schematic way, and the features in the following embodiments and examples may be combined with each other without conflict.
The drawings are provided for illustration only and are not intended to limit the invention; to better illustrate the embodiments, some parts of the drawings may be omitted, enlarged or reduced, and they do not represent the size of the actual product; it will be understood by those skilled in the art that certain well-known structures and their descriptions may be omitted from the drawings.
The same or similar reference numerals in the drawings of the embodiments correspond to the same or similar components. In the description, terms indicating an orientation or positional relationship, such as "upper", "lower", "left", "right", "front" and "rear", are based on the orientations shown in the drawings; they are used only for convenience and simplicity of description, do not indicate that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation, and are therefore not to be construed as limiting the invention; the specific meaning of these terms can be understood by those skilled in the art according to the circumstances.
As shown in fig. 1, a core CT image crack segmentation method based on a support vector machine and active contour segmentation is performed according to the following steps:
s1: and dividing the sample image into a test sample and a training sample according to the proportion of 1:6, and performing adaptive median filtering and Hessian matrix linear filtering on all pictures to realize noise reduction and enhancement of the sample image.
S2: the training-set images in S1 are divided into 32 x 32 image blocks, which are classified into background sub-images, workpiece edge sub-images, crack-free workpiece interior sub-images and cracked workpiece interior sub-images; each class is stored in its own folder, a feature matrix is extracted from the sub-images in each folder, and the matrices are named data0, data1, data2 and data3 and saved to an Excel file.
S3: three SVM models are trained with the feature matrices obtained in S2 (the parameters c and g required for SVM training are obtained by an automatic optimization algorithm). The SVM1 distinguishes the workpiece area of the image from the background area; its positive samples are the workpiece-area sub-images (data1 + data2 + data3) and its negative samples are the background sub-images (data0). The SVM2 distinguishes the workpiece edge area from the workpiece interior area; its positive samples are the interior sub-images (data2 + data3) and its negative samples are the edge sub-images (data1). The SVM3 distinguishes cracked from crack-free workpiece sub-images; its positive samples are the cracked sub-images (data2) and its negative samples are the crack-free sub-images (data3).
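A minimal scikit-learn sketch of the S3 training, assuming RBF-kernel SVMs; the small cross-validated grid stands in for the (c, g) "automatic optimization algorithm" mentioned above, whose actual form the text does not specify:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import GridSearchCV

def fit_svm(pos, neg):
    """One binary SVM: positive rows labeled 1, negative rows labeled 0;
    C and gamma chosen by grid search (illustrative grid values)."""
    X = np.vstack([pos, neg])
    y = np.concatenate([np.ones(len(pos)), np.zeros(len(neg))])
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    grid = {"svc__C": [0.1, 1.0, 10.0], "svc__gamma": ["scale", 0.01, 0.1]}
    return GridSearchCV(model, grid, cv=3).fit(X, y)

def train_cascade(background, edge, cracked, crack_free):
    """S3: SVM1 workpiece vs background, SVM2 interior vs edge,
    SVM3 cracked vs crack-free. Each argument is an [N x 14] matrix."""
    svm1 = fit_svm(np.vstack([edge, cracked, crack_free]), background)
    svm2 = fit_svm(np.vstack([cracked, crack_free]), edge)
    svm3 = fit_svm(cracked, crack_free)
    return svm1, svm2, svm3
```

Using semantically named arguments (`cracked`, `crack_free`) rather than the data2/data3 labels keeps the positive/negative roles of each SVM unambiguous.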
S4: the test image is divided into 32 x 32 image blocks of the same size as in S2, and the same features as for the training set are extracted.
S5: the feature matrix obtained in S4 is predicted with the SVM1 model obtained in S3 to distinguish the workpiece area from the background area; the SVM2 model classifies the non-background blocks into workpiece edge and workpiece interior sub-images; the SVM3 model classifies the workpiece interior sub-images into cracked and crack-free regions.
S6: and (5) reserving the image blocks predicted as the workpiece subgraphs with cracks in the step (S5), and setting the pixels of the rest image blocks as 0 to obtain an image P1 with a reduced crack range.
S7: to prevent cracks that fall exactly on sub-image boundaries from being lost, and to preserve as much of each crack as possible, the initial sampling position of the test image in S4 is shifted down, right, and diagonally down-right by half a window width, and S5 and S6 are repeated to obtain images P2, P3 and P4.
S8: the images P1, P2, P3, and P4 obtained above are added to obtain an image region including a crack.
S9: and (5) segmenting the cracks in the S8 by using an active contour segmentation method.
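For S9, scikit-image's morphological Chan-Vese implementation is one readily available active contour model; using it here is an assumption, since the text does not name the specific active contour formulation, and the iteration count and initialization are illustrative choices:

```python
import numpy as np
from skimage.segmentation import morphological_chan_vese

def segment_cracks(region, iterations=35):
    """S9: two-phase active contour segmentation of the reduced crack
    region from S8; returns a binary label image."""
    img = region.astype(float)
    if img.max() > 0:
        img = img / img.max()  # normalize so the energy terms are scale-free
    return morphological_chan_vese(img, iterations,
                                   init_level_set="checkerboard")
```

Because the region outside the merged mask is already zero, the contour only has to separate crack pixels from the retained block interiors, which keeps the iteration count low.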
The method of the present invention is described in detail below with reference to the accompanying drawings, and it is to be noted that the described embodiments are only intended to facilitate the understanding of the method of the present invention, and do not limit it in any way.
As shown in fig. 2, the original core CT image is noisy.
FIG. 3 is the core CT image after preprocessing;
as shown in fig. 4, 5, and 6, the model trained by the features in the method can better identify the non-background region, the image edge, and the image crack of the image.
As shown in fig. 8, the image obtained by 4 times of sampling can better retain the crack information of the original CT image for the next segmentation.
Fig. 9 shows the results obtained by crack division.
Finally, the above embodiments are only intended to illustrate the technical solutions of the present invention and not to limit the present invention, and although the present invention has been described in detail with reference to the preferred embodiments, it will be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions, and all of them should be covered by the claims of the present invention.

Claims (3)

1. A rock core CT crack identification and segmentation method, characterized by comprising the following steps:
s1: dividing the sample image into a test sample and a training sample according to the proportion of 1:6, and performing adaptive median filtering and Hessian matrix linear filtering on all pictures to realize noise reduction and enhancement of the sample image;
s2: dividing the training set image in the S1 into 32 x 32 image blocks, classifying the image blocks into a background subgraph, a workpiece edge subgraph, a crack-free workpiece internal subgraph and a crack workpiece internal subgraph, extracting a characteristic matrix for each class of subgraph, and respectively naming the characteristic matrices as data0, data1, data2 and data3;
s3: training three SVM models by using the characteristic matrix obtained in S2: SVM1, SVM2 and SVM3;
the SVM1 distinguishes the workpiece area of the image from the background area; its positive samples are the workpiece-area sub-images (data1 + data2 + data3) and its negative samples are the background sub-images (data0);
the SVM2 distinguishes the workpiece edge area from the workpiece interior area; its positive samples are the interior sub-images (data2 + data3) and its negative samples are the edge sub-images (data1);
the SVM3 distinguishes cracked from crack-free workpiece sub-images; its positive samples are the cracked sub-images (data2) and its negative samples are the crack-free sub-images (data3);
s4: dividing the test image into image blocks with the same size as that in S2 and extracting a feature matrix;
s5: predicting the feature matrix obtained in S4 with the SVM1 model obtained in S3, distinguishing the workpiece area from the background area;
classifying the image blocks of the non-background area into an edge subgraph and an inner subgraph of the workpiece by using an SVM2 model;
classifying the sub-images in the workpiece into a crack area and a crack-free area by using an SVM3 model;
s6: reserving the image blocks predicted as the workpiece subgraphs with cracks in the step S5, and setting the pixels of the rest image blocks as 0 to obtain an image P1 with a reduced crack range;
s7: to prevent cracks that fall exactly on sub-image boundaries from being lost, and to preserve as much of each crack as possible, the initial sampling position of the test image in S4 is shifted down, right, and diagonally down-right by half a window width, and S5 and S6 are repeated to obtain images P2, P3 and P4;
s8: adding the images P1, P2, P3 and P4 obtained in the previous step to obtain an image area containing cracks;
s9: and (5) segmenting the cracks in the S8 by using an active contour segmentation method.
2. The rock core CT crack identification and segmentation method according to claim 1, characterized in that: in S3, the SVM classifiers are built with a feature matrix having 14 elements per row as the training set, and both the training set and the extracted test set must be normalized; the feature matrix has size [N x 14], where N is the total number of sub-images and the 14 elements in each row are the 14 feature parameters of the corresponding sub-image.
3. The core CT crack identification and segmentation method as recited in claim 2, wherein: in S4, the characteristic parameters extracted from each sub-image are 14: feature 1 to feature 14;
feature 1 is the mean value of the gray levels of the image:
mean gray level: reflects the average brightness of the image; it is the mean of all pixel values in the image:
$\mathrm{mean} = \frac{1}{N}\sum_{x}\sum_{y} f(x,y)$
wherein mean represents the average value of the image pixels, x represents the rows of the original image pixels, y represents the columns of the original image pixels, f represents the original image matrix, and N represents the number of pixel points in the image matrix;
the characteristics 2 to 7 are gray level co-occurrence matrix characteristics of the images:
contrast: reflects the sharpness of the image, the distribution of the matrix values and the local variation of the image; the larger the value, the stronger the texture contrast, the deeper the grooves and the clearer the image;
$CON = \sum_{n} n^{2} \Big[ \sum_{|i-j|=n} p(i,j) \Big]$
where CON is the image contrast, i and j index the rows and columns of the co-occurrence matrix, p is the gray-level co-occurrence matrix obtained after gray-level compression of the original image, $N_g$ is the number of gray levels, d is the absolute value of the difference between i and j, and n runs over the possible values of d;
correlation: reflects local gray-level correlation by measuring the similarity of image gray levels in the row or column direction; the larger the value, the stronger the correlation;
$CORRLN = \dfrac{\sum_{i}\sum_{j} (i \cdot j)\, p(i,j) - \mu_i \mu_j}{\sigma_i \sigma_j}$
where CORRLN is the image correlation, $\mu_i$ and $\sigma_i^2$ are the mean and variance of $p_i$, $\mu_j$ and $\sigma_j^2$ are the mean and variance of $p_j$, and
$p_i = \sum_{j} p(i,j)$ is the sum of row i of the co-occurrence matrix, $p_j = \sum_{i} p(i,j)$ is the sum of column j of the co-occurrence matrix;
energy: reflecting the uniformity degree of the image gray level distribution and the texture thickness; if the element values of the gray level co-occurrence matrix are similar, the energy is small, and the texture is fine; if some of the values are large, and others are small, the energy value is large;
$ASM = \sum_{i}\sum_{j} p(i,j)^{2}$
wherein ASM is an angular second moment representing the energy of the image;
inverse variance: measuring local texture change of the image; the larger the value is, the more regular the image texture is;
$IDM = \sum_{i}\sum_{j} \dfrac{p(i,j)}{1+(i-j)^{2}}$
in the formula: IDM denotes the inverse variance;
variance: reflecting the period of the texture, wherein the larger the value is, the larger the period of the texture is;
$VAR = \sum_{i}\sum_{j} (i-m)^{2}\, p(i,j)$
in the formula: m represents the mean of p (i, j);
and (3) mean sum: reflecting the light and shade depth of the image, which is the measurement of the average gray value of the pixel points in the image area;
$SUM = \sum_{k} k \sum_{i+j=k} p(i,j)$
In the formula: k represents the sum of subscripts i and j;
features 8-14 are Hu invariant moment features of the image.
CN202110378251.3A 2021-04-08 2021-04-08 Rock core CT crack identification and segmentation method Active CN112991370B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110378251.3A CN112991370B (en) 2021-04-08 2021-04-08 Rock core CT crack identification and segmentation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110378251.3A CN112991370B (en) 2021-04-08 2021-04-08 Rock core CT crack identification and segmentation method

Publications (2)

Publication Number Publication Date
CN112991370A CN112991370A (en) 2021-06-18
CN112991370B true CN112991370B (en) 2022-11-25

Family

ID=76339500

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110378251.3A Active CN112991370B (en) 2021-04-08 2021-04-08 Rock core CT crack identification and segmentation method

Country Status (1)

Country Link
CN (1) CN112991370B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114463324B * 2022-02-23 2023-05-05 China University of Petroleum (East China) Core image crack identification method based on Hessian matrix filtering
CN117152373B * 2023-11-01 2024-02-02 China University of Petroleum (East China) Core-scale pore network model construction method considering cracks

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107133630A (en) * 2016-02-29 2017-09-05 China Petroleum & Chemical Corp. A method for determining carbonate pore type from scanned images
CN108364278A (en) * 2017-12-21 2018-08-03 China University of Petroleum (Beijing) A core crack extraction method and system
CN109523566A (en) * 2018-09-18 2019-03-26 Jiang Feng An automatic segmentation method for sandstone thin-section microscopic images
CN110516733A (en) * 2019-08-23 2019-11-29 Southwest Petroleum University A well-logging lithology identification method based on an improved multi-class twin support vector machine
CN112102229A (en) * 2020-07-23 2020-12-18 Xi'an Jiaotong University An intelligent industrial CT defect identification method based on deep learning
CN112116609A (en) * 2019-06-21 2020-12-22 Straxcorp Pty Ltd Machine learning classification method and system based on structure or material segmentation in images

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111582387A (en) * 2020-05-11 2020-08-25 Jilin University Rock spectral feature fusion classification method and system


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"Sample-Specific SVM Learning for Person Re-identification"; Ying Zhang et al.; 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 20161231; 1278-1287 *
"Crack segmentation method for CT sequence images based on Hessian matrix and entropy" (in Chinese); Wang Huiqian et al.; Chinese Journal of Scientific Instrument; 20160831; Vol. 37, No. 8; 1800-1801 *

Also Published As

Publication number Publication date
CN112991370A (en) 2021-06-18

Similar Documents

Publication Publication Date Title
CN109522908B (en) Image significance detection method based on region label fusion
CN112991370B (en) Rock core CT crack identification and segmentation method
CN109840483B (en) Landslide crack detection and identification method and device
CN102687007A (en) High-throughput biomarker segmentation utilizing hierarchical normalized cuts
CN109977968B (en) SAR change detection method based on deep learning classification comparison
CN108710862B (en) High-resolution remote sensing image water body extraction method
CN108596952B (en) Rapid deep learning remote sensing image target detection method based on candidate region screening
CN113536963B (en) SAR image airplane target detection method based on lightweight YOLO network
CN110598030A (en) Oracle bone rubbing classification method based on local CNN framework
CN113223170B (en) Pore recognition method based on compact sandstone CT image three-dimensional reconstruction
CN109191418A (en) A kind of method for detecting change of remote sensing image based on contraction self-encoding encoder feature learning
CN111738332A (en) Underwater multi-source acoustic image substrate classification method and system based on feature level fusion
CN112949772A (en) Stomach cancer multidimensional feature extraction and analysis system based on image omics
CN112651955A (en) Intestinal tract image identification method and terminal device
CN109145993B (en) SAR image classification method based on multi-feature and non-negative automatic encoder
CN112703531A (en) Generating annotation data for tissue images
CN103065296B (en) High-resolution remote sensing image residential area extraction method based on edge feature
Khamael et al. Using adapted JSEG algorithm with fuzzy C mean for segmentation and counting of white blood cell and nucleus images
CN116309333A (en) WSI image weak supervision pathological analysis method and device based on deep learning
CN114612738B (en) Training method of cell electron microscope image segmentation model and organelle interaction analysis method
CN114998876A (en) Sea-land transition phase shale streak layer structure identification method based on rock slice image
CN113887652B (en) Remote sensing image weak and small target detection method based on morphology and multi-example learning
CN114862883A (en) Target edge extraction method, image segmentation method and system
CN109409375B (en) SAR image semantic segmentation method based on contour structure learning model
CN111814887A (en) Image feature extraction method based on subspace learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant