CN112991370A - Rock core CT crack identification and segmentation method - Google Patents

Rock core CT crack identification and segmentation method

Info

Publication number
CN112991370A
Authority
CN
China
Prior art keywords
image
workpiece
matrix
crack
sub
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110378251.3A
Other languages
Chinese (zh)
Other versions
CN112991370B (en)
Inventor
邹永宁
张智斌
余浩松
李琦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University
Original Assignee
Chongqing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University
Priority to CN202110378251.3A
Publication of CN112991370A
Application granted
Publication of CN112991370B
Current legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/217 Validation; Performance evaluation; Active pattern learning techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/149 Segmentation; Edge detection involving deformable models, e.g. active contour models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10081 Computed x-ray tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30181 Earth observation
    • G06T 2207/30184 Infrastructure

Abstract

The invention relates to a rock core CT crack identification and segmentation method, and belongs to the technical field of image processing. The data set is divided into a test set and a training set at a ratio of 1:6, and adaptive median filtering and Hessian-matrix linear filtering are applied to all images to enhance image quality. Each training-set image is divided into small blocks of equal size; Hu invariant moment features, gray-level co-occurrence matrix features and gray-mean features are extracted from the resulting sub-images, and the obtained feature matrix is used to train an SVM prediction model. Each test-set image is divided into blocks of the same size, the same features are extracted, and the trained SVM model predicts each block to complete coarse crack localization. Finally, the image blocks containing cracks are segmented with an active contour method to obtain the final crack segmentation result. The method segments crack defects in core CT images accurately and quickly, with strong resistance to interference.

Description

Rock core CT crack identification and segmentation method
Technical Field
The invention belongs to the technical field of image processing, and relates to a rock core CT crack identification and segmentation method.
Background
Computed tomography (CT) uses the attenuation of X-rays passing through different materials and a reconstruction algorithm to obtain the internal density distribution of a measured object. With its clear images and high resolution, CT is one of the recognized advanced nondestructive testing techniques and is widely applied in aerospace, medicine, biology, industry, agriculture, electronics, archaeology and other fields.
CT is widely used in oil and gas exploration, particularly in core analysis. From core CT scan images one can identify the reservoir's storage-space types (e.g., fractures, karst caves and dissolution pores) as well as the density and aperture of fracture development, the distribution of dissolution cavities, the core porosity and other information. Before any such analysis, the cracks in the core CT images must first be segmented.
Crack segmentation has been the subject of much research. For example, Oliveira H. et al. used a dynamic threshold method and entropy to segment cracks in road images; Landstrom et al. extracted longitudinal cracks in road-surface images using morphological methods and a logistic regression model; Liu L. et al. studied CT image crack segmentation based on wavelet transforms and the C-V model; and Li Z. et al. proposed a CT image crack segmentation method based on the finite plane integral transform (FPIT) and planelets. Core CT image crack segmentation has also been studied: one approach segments core CT images with an improved level-set algorithm; Wu Xiayuan et al. located crack positions with Faster R-CNN and then segmented the image with improved threshold segmentation; and He Feng et al. segmented cracks in core CT images with an ant-colony clustering algorithm.
Disclosure of Invention
In view of the above, the present invention provides a rock core CT crack identification and segmentation method. To further improve the segmentation accuracy, robustness and degree of automation of core crack segmentation, the method, based on a support vector machine and active contours, uses the gray-level co-occurrence matrix, Hu invariant moments and the gray mean as features. The gray-level co-occurrence matrix contributes six features: contrast, correlation, energy, inverse variance, variance and sum average; the Hu invariant moments contribute seven features that are invariant to rotation, scaling and translation. With these features, the crack position in a CT image can be coarsely located without a large amount of training data, which narrows the region to be segmented and yields high accuracy. The final crack segmentation result is then obtained with an active contour segmentation method.
In order to achieve the purpose, the invention provides the following technical scheme:
a core CT crack identification and segmentation method comprises the following steps:
s1: dividing a sample image into a test sample and a training sample according to the proportion of 1:6, and performing adaptive median filtering and Hessian matrix linear filtering on all pictures to realize noise reduction and enhancement of the sample image;
s2: dividing the training-set images from S1 into 32 × 32 image blocks, classifying the blocks into background sub-images, workpiece-edge sub-images, crack-free workpiece-interior sub-images and cracked workpiece-interior sub-images, extracting a feature matrix for each class of sub-image, and naming the matrices data0, data1, data2 and data3 respectively;
s3: training three SVM models, SVM1, SVM2 and SVM3, using the feature matrices obtained in S2;
the SVM1 is used to distinguish the workpiece region of the image from the background region, where the positive samples are the workpiece-region sub-images data1 + data2 + data3 and the negative samples are the background sub-images data0;
the SVM2 is used to distinguish the workpiece-edge region from the workpiece-interior region, where the positive samples are the workpiece-interior sub-images data2 + data3 and the negative samples are the workpiece-edge sub-images data1;
the SVM3 is used to distinguish cracked workpiece sub-images from crack-free workpiece sub-images, where the positive samples are the cracked sub-images data2 and the negative samples are the crack-free sub-images data3;
s4: dividing the test image into image blocks with the same size as that in S2 and extracting a feature matrix;
s5: predicting the feature matrix obtained in the step S4 by using the SVM1 model obtained in the step S3, and distinguishing a workpiece area and a background area;
classifying the image blocks of the non-background area into an edge subgraph of the workpiece and an inner subgraph of the workpiece by using an SVM2 model;
classifying the sub-images in the workpiece into crack regions and crack-free regions by using an SVM3 model;
s6: retaining the image blocks predicted in S5 as cracked workpiece sub-images and setting the pixels of all other blocks to 0, obtaining an image P1 with a narrowed crack range;
s7: to avoid missing cracks that happen to fall on sub-image boundaries and to retain as much crack information as possible, shifting the initial sampling position of the test image in S4 downward, rightward and diagonally down-right by half the window width, and repeating S5 and S6 to obtain images P2, P3 and P4;
s8: adding the previously obtained images P1, P2, P3 and P4 to obtain an image region containing a crack;
s9: the crack in S8 is segmented using the active contour segmentation method.
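For illustration, the three-stage prediction of S5 can be sketched in Python as a cascade over the per-block feature rows. This is a minimal sketch under stated assumptions: the function and variable names (classify_blocks, svm1 to svm3) are illustrative rather than taken from the patent, and the SVMs are assumed to be scikit-learn-style classifiers with label 1 for the positive class.

```python
import numpy as np

def classify_blocks(features, svm1, svm2, svm3):
    """Hierarchical S5 prediction (sketch): a 32x32 block is kept as a
    crack candidate only if it passes all three SVM stages."""
    keep = np.zeros(len(features), dtype=bool)
    for k, row in enumerate(features):
        row = row.reshape(1, -1)
        if svm1.predict(row)[0] != 1:          # stage 1: background block
            continue
        if svm2.predict(row)[0] != 1:          # stage 2: workpiece-edge block
            continue
        keep[k] = svm3.predict(row)[0] == 1    # stage 3: cracked interior block
    return keep
```

The cascade mirrors S5: most blocks are rejected by the cheap early stages, so the crack/no-crack decision of SVM3 is made only for workpiece-interior blocks.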
Optionally, in S4, the SVM classifier is constructed using a feature matrix with 14 elements per row as the training set; the training set is an [N × 14] feature matrix, where N is the total number of training sub-images and the 14 elements of each row are the 14 feature parameters of the corresponding sub-image, and both the training set and the extracted test-set features are normalized.
Optionally, in S4, 14 feature parameters are extracted from each sub-image: feature 1 to feature 14;
feature 1 is the gray mean of the image:
gray mean: reflects the average gray level of the image; it is the mean of all pixel values in the image;

$$mean = \frac{1}{N}\sum_{x}\sum_{y} f(x,y)$$

where mean represents the average value of the image pixels, x and y index the rows and columns of the original image, f represents the original image matrix, and N represents the number of pixels in the image matrix;
features 2 to 7 are gray-level co-occurrence matrix features of the image:
contrast: reflects the sharpness of the image, the distribution of the matrix values and the local variation of the image; the larger the value, the stronger the contrast between texture elements, the deeper the texture grooves and the sharper the image;

$$CON = \sum_{d=0}^{N_g-1} d^{2} \sum_{|i-j|=d} p(i,j)$$

where CON represents the contrast of the image, i and j index the rows and columns of the co-occurrence matrix, p represents the gray-level co-occurrence matrix obtained after gray-level compression of the original image, $N_g$ represents the number of gray levels, and d is the absolute value of the difference between i and j;
correlation: reflects local gray-level correlation and measures the similarity of the image's gray levels in the row or column direction; the larger the value, the stronger the correlation;

$$CORRLN = \frac{\sum_{i}\sum_{j}(i \cdot j)\,p(i,j) - \mu_i \mu_j}{\sigma_i \sigma_j}$$

where CORRLN represents the image correlation, $\mu_i$ and $\sigma_i^2$ are the mean and variance of $p_i$, and $\mu_j$ and $\sigma_j^2$ are the mean and variance of $p_j$, with

$$p_i = \sum_{j=0}^{N_g-1} p(i,j), \qquad p_j = \sum_{i=0}^{N_g-1} p(i,j)$$

that is, $p_i$ is the sum of the data in row i of the co-occurrence matrix and $p_j$ is the sum of the data in column j;
energy: reflects the uniformity of the gray-level distribution and the coarseness of the texture; if the element values of the gray-level co-occurrence matrix are similar, the energy is small and the texture is fine; if some values are large and others small, the energy value is large;

$$ASM = \sum_{i}\sum_{j} p(i,j)^{2}$$

where ASM is the angular second moment, representing the energy of the image;
inverse variance: measures local texture variation; the larger the value, the more regular the image texture;

$$IDM = \sum_{i}\sum_{j} \frac{p(i,j)}{1+(i-j)^{2}}$$

where IDM denotes the inverse variance (inverse difference moment);
variance: reflects the period of the texture; the larger the value, the longer the texture period;

$$VAR = \sum_{i}\sum_{j} (i-m)^{2}\, p(i,j)$$

where m represents the mean of p(i, j);
sum average: reflects the overall brightness of the image and is a measure of the average gray value of the pixels in the image region;

$$SA = \sum_{k=2}^{2N_g} k \sum_{i+j=k} p(i,j)$$

where k represents the sum of the subscripts i and j;
features 8-14 are the Hu invariant moment features of the image.
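As a concrete sketch of the 14-element feature vector (gray mean, six gray-level co-occurrence matrix features, seven Hu invariant moments), the following Python uses scikit-image and OpenCV. The quantization to 16 gray levels and the GLCM distance of 1 at angle 0 are illustrative assumptions, since the patent does not fix these parameters, and block_features is a hypothetical helper name.

```python
import cv2
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def block_features(block, levels=16):
    """Compute the 14 feature parameters for one 32x32 grayscale block."""
    feats = [float(block.mean())]                                # 1: gray mean
    top = block.max()                                            # quantize before building the GLCM
    q = (block.astype(np.float64) / top * (levels - 1)).astype(np.uint8) \
        if top > 0 else np.zeros(block.shape, dtype=np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0],
                        levels=levels, symmetric=True, normed=True)
    p = glcm[:, :, 0, 0]
    i, j = np.indices(p.shape)
    feats.append(float(graycoprops(glcm, "contrast")[0, 0]))     # 2: contrast (CON)
    feats.append(float(graycoprops(glcm, "correlation")[0, 0]))  # 3: correlation (CORRLN)
    feats.append(float(graycoprops(glcm, "ASM")[0, 0]))          # 4: energy (ASM)
    feats.append(float(graycoprops(glcm, "homogeneity")[0, 0]))  # 5: inverse variance (IDM)
    m = (i * p).sum()                                            # mean of the GLCM
    feats.append(float(((i - m) ** 2 * p).sum()))                # 6: variance (VAR)
    feats.append(float(((i + j) * p).sum()))                     # 7: sum average (SA)
    hu = cv2.HuMoments(cv2.moments(block.astype(np.float32))).ravel()
    feats.extend(hu.tolist())                                    # 8-14: Hu invariant moments
    return np.array(feats)
```

Note that scikit-image's "homogeneity" property is the inverse difference moment, i.e. the inverse variance defined above; variance and sum average are not built in and are computed directly from the normalized matrix p.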
The invention has the following beneficial effects: using a support vector machine to narrow the candidate crack region reduces noise interference in the image and improves both the running speed and the segmentation accuracy of the algorithm.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objectives and other advantages of the invention may be realized and attained by the means of the instrumentalities and combinations particularly pointed out hereinafter.
Drawings
For the purposes of promoting a better understanding of the objects, aspects and advantages of the invention, reference will now be made to the following detailed description taken in conjunction with the accompanying drawings in which:
FIG. 1 is a flowchart of the method of the present invention;
FIG. 2 is a core CT image before preprocessing;
FIG. 3 is the core CT image after preprocessing;
FIG. 4 is a diagram of the effect of the predictive model SVM1 in identifying background and non-background regions;
FIG. 5 is a graph of the effect of a prediction model SVM2 in identifying the edge and non-edge regions of a core;
FIG. 6 is a graph illustrating the effect of predictive model SVM3 in identifying cracked and non-cracked regions within a sample region;
FIG. 7 is a sample mask map obtained from a single sampling;
FIG. 8 shows the superposition of the mask maps obtained from the four samplings;
fig. 9 shows the results obtained by crack division.
Detailed Description
The embodiments of the present invention are described below by way of specific examples, and other advantages and effects of the invention will be readily apparent to those skilled in the art from the disclosure of this specification. The invention may also be implemented or applied through other, different embodiments, and the details of this specification may be modified in various respects without departing from the spirit of the invention. It should be noted that the drawings provided with the following embodiments only illustrate the basic idea of the invention schematically, and the features of the following embodiments and examples may be combined with one another in the absence of conflict.
The drawings are provided for illustration only and are not intended to limit the invention. To better illustrate the embodiments, some parts of the drawings may be omitted, enlarged or reduced and do not represent the dimensions of an actual product; and, as those skilled in the art will understand, certain well-known structures and their descriptions may be omitted from the drawings.
The same or similar reference numerals in the drawings of the embodiments denote the same or similar components. In the description of the invention, it should be understood that terms indicating an orientation or positional relationship, such as "upper", "lower", "left", "right", "front" and "rear", are based on the orientations shown in the drawings, are used only for convenience and simplicity of description, and do not indicate or imply that the referenced device or element must have a specific orientation or be constructed and operated in a specific orientation. Such terms are illustrative only and are not to be construed as limiting the invention; their specific meanings can be understood by those skilled in the art according to the particular circumstances.
As shown in fig. 1, a core CT image crack segmentation method based on a support vector machine and active contour segmentation is performed according to the following steps:
s1: dividing the sample image into a test sample and a training sample according to the proportion of 1:6, and performing adaptive median filtering and Hessian matrix linear filtering on all pictures to realize noise reduction and enhancement of the sample image.
S2: each training-set image from S1 is divided into 32 × 32 image blocks; the blocks are classified into background sub-images, workpiece-edge sub-images, crack-free workpiece-interior sub-images and cracked workpiece-interior sub-images and stored in separate folders by class; a feature matrix is extracted from the sub-images in each folder, the matrices are named data0, data1, data2 and data3 respectively, and they are saved to an Excel file.
S3: three SVM models are trained using the feature matrices obtained in S2 (the parameters c and g required for SVM training are obtained by an automatic optimization algorithm). SVM1 distinguishes the workpiece region of the image from the background region, with the workpiece-region sub-images (data1 + data2 + data3) as positive samples and the background sub-images (data0) as negative samples; SVM2 distinguishes the workpiece-edge region from the workpiece-interior region, with the workpiece-interior sub-images (data2 + data3) as positive samples and the workpiece-edge sub-images (data1) as negative samples; SVM3 distinguishes cracked workpiece sub-images from crack-free ones, with the cracked sub-images (data2) as positive samples and the crack-free sub-images (data3) as negative samples.
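The "automatic optimization algorithm" for c and g is not specified in the patent; a plausible stand-in, sketched below, is a cross-validated grid search over the C and gamma parameters of an RBF-kernel SVM (train_stage_svm is a hypothetical helper, and the grid ranges are illustrative).

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

def train_stage_svm(pos, neg):
    """Train one stage of the cascade on positive/negative feature rows;
    C and gamma (the patent's c and g) are chosen by 5-fold grid search."""
    X = np.vstack([pos, neg])
    y = np.hstack([np.ones(len(pos)), np.zeros(len(neg))])
    grid = GridSearchCV(SVC(kernel="rbf"),
                        {"C": 2.0 ** np.arange(-5, 6),
                         "gamma": 2.0 ** np.arange(-8, 3)},
                        cv=5)
    grid.fit(X, y)
    return grid.best_estimator_

# Cascade per S3, with data0..data3 being the [N x 14] matrices from S2:
# svm1 = train_stage_svm(np.vstack([data1, data2, data3]), data0)
# svm2 = train_stage_svm(np.vstack([data2, data3]), data1)
# svm3 = train_stage_svm(data2, data3)
```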
S4: the test image is divided into 32 × 32 image blocks, the same size as in S2, and the same features as for the training set are extracted.
S5: predicting the feature matrix obtained in the step S4 by using the SVM1 model obtained in the step S3, and distinguishing a workpiece area and a background area; classifying the image blocks of the non-background area into an edge subgraph of the workpiece and an inner subgraph of the workpiece by using an SVM2 model; the interior subgraph of the workpiece is then classified into cracked and non-cracked regions using the SVM3 model.
S6: the image blocks predicted in S5 as cracked workpiece sub-images are retained and the pixels of the remaining blocks are set to 0, giving an image P1 with a narrowed crack range.
S7: to avoid missing cracks that happen to fall on sub-image boundaries and to preserve as much crack information as possible, the initial sampling position of the test image in S4 is shifted down, right and diagonally down-right by half a window width, and S5 and S6 are repeated to obtain images P2, P3 and P4.
S8: the images P1, P2, P3 and P4 obtained above are added to obtain the image region containing the cracks.
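Steps S6 to S8 can be sketched as four tilings offset by half a window, each yielding a mask that keeps only the blocks flagged as cracked, with the pixelwise maximum used here as a stand-in for the addition in S8; is_cracked_block is a hypothetical wrapper around the feature extraction and the three-SVM cascade.

```python
import numpy as np

def crack_region(img, is_cracked_block, block=32):
    """S6-S8 sketch: tile at four half-window offsets, zero out blocks not
    predicted as cracked, and take the union of the four masks P1..P4."""
    h, w = img.shape
    half = block // 2
    out = np.zeros_like(img)
    for dy, dx in [(0, 0), (half, 0), (0, half), (half, half)]:
        mask = np.zeros_like(img)
        for y in range(dy, h - block + 1, block):
            for x in range(dx, w - block + 1, block):
                blk = img[y:y + block, x:x + block]
                if is_cracked_block(blk):              # S5 cascade decision
                    mask[y:y + block, x:x + block] = blk
        out = np.maximum(out, mask)                    # union of P1..P4
    return out
```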
S9: the crack in S8 is segmented using the active contour segmentation method.
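S9 can be realized with a region-based active contour. The sketch below uses scikit-image's morphological Chan-Vese; the specific active-contour variant, the iteration count and the initialization are assumptions, since the patent only names "the active contour segmentation method".

```python
from skimage.segmentation import morphological_chan_vese

def segment_cracks(region, iterations=200):
    """S9 sketch: region is the reduced crack-range image from S8;
    returns a binary crack mask (num_iter per scikit-image >= 0.19)."""
    return morphological_chan_vese(region, num_iter=iterations,
                                   init_level_set="checkerboard")
```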
The method of the present invention is described in detail below with reference to the accompanying drawings, and it is to be noted that the described embodiments are only intended to facilitate the understanding of the method of the present invention, and do not limit it in any way.
As shown in fig. 2, the original core CT image is noisy.
FIG. 3 is the core CT image after preprocessing;
As shown in figs. 4, 5 and 6, the models trained on these features identify the non-background region, the workpiece edges and the cracks of the image well.
As shown in fig. 8, the image obtained from the four samplings retains the crack information of the original CT image well for the subsequent segmentation.
Fig. 9 shows the results obtained by crack division.
Finally, it should be noted that the above embodiments are intended only to illustrate, not to limit, the technical solutions of the invention. Although the invention has been described in detail with reference to preferred embodiments, those skilled in the art will understand that modifications or equivalent substitutions may be made to the technical solutions without departing from their spirit and scope, and all such modifications are to be covered by the claims of the invention.

Claims (3)

1. A rock core CT crack identification and segmentation method, characterized by comprising the following steps:
s1: dividing a sample image into a test sample and a training sample according to the proportion of 1:6, and performing adaptive median filtering and Hessian matrix linear filtering on all pictures to realize noise reduction and enhancement of the sample image;
s2: dividing the training-set images from S1 into 32 × 32 image blocks, classifying the blocks into background sub-images, workpiece-edge sub-images, crack-free workpiece-interior sub-images and cracked workpiece-interior sub-images, extracting a feature matrix for each class of sub-image, and naming the matrices data0, data1, data2 and data3 respectively;
s3: training three SVM models, SVM1, SVM2 and SVM3, using the feature matrices obtained in S2;
the SVM1 is used to distinguish the workpiece region of the image from the background region, where the positive samples are the workpiece-region sub-images data1 + data2 + data3 and the negative samples are the background sub-images data0;
the SVM2 is used to distinguish the workpiece-edge region from the workpiece-interior region, where the positive samples are the workpiece-interior sub-images data2 + data3 and the negative samples are the workpiece-edge sub-images data1;
the SVM3 is used to distinguish cracked workpiece sub-images from crack-free workpiece sub-images, where the positive samples are the cracked sub-images data2 and the negative samples are the crack-free sub-images data3;
s4: dividing the test image into image blocks with the same size as that in S2 and extracting a feature matrix;
s5: predicting the feature matrix obtained in the step S4 by using the SVM1 model obtained in the step S3, and distinguishing a workpiece area and a background area;
classifying the image blocks of the non-background area into an edge subgraph of the workpiece and an inner subgraph of the workpiece by using an SVM2 model;
classifying the sub-images in the workpiece into crack regions and crack-free regions by using an SVM3 model;
s6: retaining the image blocks predicted in S5 as cracked workpiece sub-images and setting the pixels of all other blocks to 0, obtaining an image P1 with a narrowed crack range;
s7: to avoid missing cracks that happen to fall on sub-image boundaries and to retain as much crack information as possible, shifting the initial sampling position of the test image in S4 downward, rightward and diagonally down-right by half the window width, and repeating S5 and S6 to obtain images P2, P3 and P4;
s8: adding the previously obtained images P1, P2, P3 and P4 to obtain an image region containing a crack;
s9: the crack in S8 is segmented using the active contour segmentation method.
2. The core CT crack identification and segmentation method according to claim 1, characterized in that: in S4, the SVM classifier is constructed using a feature matrix with 14 elements per row as the training set; the training set is an [N × 14] feature matrix, where N is the total number of training sub-images and the 14 elements of each row are the 14 feature parameters of the corresponding sub-image, and both the training set and the extracted test-set features are normalized.
3. The core CT crack identification and segmentation method according to claim 1 or 2, characterized in that: in S4, 14 feature parameters are extracted from each sub-image: feature 1 to feature 14;
feature 1 is the gray mean of the image:
gray mean: reflects the average gray level of the image; it is the mean of all pixel values in the image;

$$mean = \frac{1}{N}\sum_{x}\sum_{y} f(x,y)$$

where mean represents the average value of the image pixels, x and y index the rows and columns of the original image, f represents the original image matrix, and N represents the number of pixels in the image matrix;
features 2 to 7 are gray-level co-occurrence matrix features of the image:
contrast: reflects the sharpness of the image, the distribution of the matrix values and the local variation of the image; the larger the value, the stronger the contrast between texture elements, the deeper the texture grooves and the sharper the image;

$$CON = \sum_{d=0}^{N_g-1} d^{2} \sum_{|i-j|=d} p(i,j)$$

where CON represents the contrast of the image, i and j index the rows and columns of the co-occurrence matrix, p represents the gray-level co-occurrence matrix obtained after gray-level compression of the original image, $N_g$ represents the number of gray levels, and d is the absolute value of the difference between i and j;
correlation: reflects local gray-level correlation and measures the similarity of the image's gray levels in the row or column direction; the larger the value, the stronger the correlation;

$$CORRLN = \frac{\sum_{i}\sum_{j}(i \cdot j)\,p(i,j) - \mu_i \mu_j}{\sigma_i \sigma_j}$$

where CORRLN represents the image correlation, $\mu_i$ and $\sigma_i^2$ are the mean and variance of $p_i$, and $\mu_j$ and $\sigma_j^2$ are the mean and variance of $p_j$, with

$$p_i = \sum_{j=0}^{N_g-1} p(i,j), \qquad p_j = \sum_{i=0}^{N_g-1} p(i,j)$$

that is, $p_i$ is the sum of the data in row i of the co-occurrence matrix and $p_j$ is the sum of the data in column j;
energy: reflects the uniformity of the gray-level distribution and the coarseness of the texture; if the element values of the gray-level co-occurrence matrix are similar, the energy is small and the texture is fine; if some values are large and others small, the energy value is large;

$$ASM = \sum_{i}\sum_{j} p(i,j)^{2}$$

where ASM is the angular second moment, representing the energy of the image;
inverse variance: measures local texture variation; the larger the value, the more regular the image texture;

$$IDM = \sum_{i}\sum_{j} \frac{p(i,j)}{1+(i-j)^{2}}$$

where IDM denotes the inverse variance (inverse difference moment);
variance: reflects the period of the texture; the larger the value, the longer the texture period;

$$VAR = \sum_{i}\sum_{j} (i-m)^{2}\, p(i,j)$$

where m represents the mean of p(i, j);
sum average: reflects the overall brightness of the image and is a measure of the average gray value of the pixels in the image region;

$$SA = \sum_{k=2}^{2N_g} k \sum_{i+j=k} p(i,j)$$

where k represents the sum of the subscripts i and j;
features 8-14 are the Hu invariant moment features of the image.
CN202110378251.3A 2021-04-08 2021-04-08 Rock core CT crack identification and segmentation method Active CN112991370B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110378251.3A CN112991370B (en) 2021-04-08 2021-04-08 Rock core CT crack identification and segmentation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110378251.3A CN112991370B (en) 2021-04-08 2021-04-08 Rock core CT crack identification and segmentation method

Publications (2)

Publication Number Publication Date
CN112991370A (en) 2021-06-18
CN112991370B (en) 2022-11-25

Family

ID=76339500

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110378251.3A Active CN112991370B (en) 2021-04-08 2021-04-08 Rock core CT crack identification and segmentation method

Country Status (1)

Country Link
CN (1) CN112991370B (en)


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107133630A (en) * 2016-02-29 2017-09-05 中国石油化工股份有限公司 A kind of method that carbonate porosity type is judged based on scan image
CN108364278A (en) * 2017-12-21 2018-08-03 中国石油大学(北京) A kind of rock core crack extract method and system
CN109523566A (en) * 2018-09-18 2019-03-26 姜枫 A kind of automatic division method of Sandstone Slice micro-image
CN112116609A (en) * 2019-06-21 2020-12-22 斯特拉克斯私人有限公司 Machine learning classification method and system based on structure or material segmentation in image
US20200401843A1 (en) * 2019-06-21 2020-12-24 StraxCorp Pty. Ltd. Method and system for machine learning classification based on structure or material segmentation in an image
CN110516733A (en) * 2019-08-23 2019-11-29 西南石油大学 A kind of Recognition of Weil Logging Lithology method based on the more twin support vector machines of classification of improvement
CN111582387A (en) * 2020-05-11 2020-08-25 吉林大学 Rock spectral feature fusion classification method and system
CN112102229A (en) * 2020-07-23 2020-12-18 西安交通大学 Intelligent industrial CT detection defect identification method based on deep learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YING ZHANG et al.: "Sample-Specific SVM Learning for Person Re-identification", 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) *
WANG Huiqian et al.: "Crack segmentation method for CT sequence images based on Hessian matrix and entropy", Chinese Journal of Scientific Instrument *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114463324A (en) * 2022-02-23 2022-05-10 中国石油大学(华东) Rock core image crack identification method based on Hessian matrix filtering
CN114463324B (en) * 2022-02-23 2023-05-05 中国石油大学(华东) Core image crack identification method based on hessian matrix filtering
RU2815488C1 * 2022-12-22 2024-03-18 Southwest Petroleum University Method of recognizing cracks in scanned image of boring core barrel
RU2815488C9 * 2022-12-22 2024-04-25 Southwest Petroleum University Method of recognizing cracks in scanned image of boring core barrel
CN117152373A (en) * 2023-11-01 2023-12-01 中国石油大学(华东) Core-level pore network model construction method considering cracks
CN117152373B (en) * 2023-11-01 2024-02-02 中国石油大学(华东) Core-level pore network model construction method considering cracks

Also Published As

Publication number Publication date
CN112991370B (en) 2022-11-25


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant