CN112784894A - Automatic labeling method for rock slice microscopic image - Google Patents

Automatic labeling method for rock slice microscopic image

Info

Publication number
CN112784894A
CN112784894A (application CN202110063235.5A)
Authority
CN
China
Prior art keywords: image, rock slice, automatic labeling, rock, segmentation
Prior art date
Legal status: Granted
Application number
CN202110063235.5A
Other languages
Chinese (zh)
Other versions
CN112784894B (en)
Inventor
陈雁
易雨
苗波
金光婷
黄玉楠
安玉钏
廖梦羽
王柯
代永芳
李祉呈
常国飚
阳旭菻
李平
钟学燕
Current Assignee
Southwest Petroleum University
Original Assignee
Southwest Petroleum University
Application filed by Southwest Petroleum University
Priority to CN202110063235.5A
Publication of CN112784894A
Application granted; publication of CN112784894B
Legal status: Active


Classifications

    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T5/70 Denoising; Smoothing
    • G06T7/12 Edge-based segmentation
    • G06T7/13 Edge detection
    • G06T7/187 Segmentation; Edge detection involving region growing, region merging or connected component labelling
    • G06T2207/10056 Microscopic image
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/20221 Image fusion; Image merging


Abstract

The invention discloses an automatic labeling method for rock slice microscopic images. The method improves on the manual step of dotting and drawing outline frames on images: it achieves automatic labeling of complex rock slice images, classifies and labels the different particle types within a single image, dots and traces the outline of each particle, and assigns each particle its corresponding label name. The invention automates the dotting and outline-drawing of rock slice images, greatly reducing the time experts spend on labeling; it can effectively assist experts in labeling complex rock slice images in batches, achieves high accuracy, and greatly improves working efficiency.

Description

Automatic labeling method for rock slice microscopic image
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to an automatic labeling method for a rock slice microscopic image.
Background
A rock slice microscopic image is obtained by grinding a rock sample into a slice 0.03 mm thick, placing the slice on the stage of a polarizing microscope, and photographing it with a high-definition camera. In the field of geology, rock slice identification means observing and analyzing the composition and content of the minerals in rock slice images and then classifying and naming them. In general, however, a rock slice image contains numerous mineral particles, the contrast between different particles is low, and the internal microstructure of a single particle is complex. This makes sandstone image identification difficult and places high demands on the appraiser. Identification conclusions are also highly subjective, with different experts often reaching different conclusions, and identification efficiency is low: even an experienced appraiser can identify only a limited number of sandstone slices per day.
Labeled data is especially important in the identification process: data labeling turns raw data into data an algorithm can use. Traditional data labeling relies mainly on manual annotation of text, images, and similar data, which consumes large amounts of manpower and material resources and can no longer keep up with demand. Once complicated outline drawing is involved, the difficulty of manual labeling increases greatly, so an automatic, intelligent labeling method is needed to assist manual labeling and improve labeling efficiency.
At present, labeling of rock slices depends on manual annotation by domain experts: on an image full of rock particles and impurities, the expert must accurately judge the type of every particle, manually dot and trace the particle outlines, and finally attach the corresponding label to each particle. This manual approach consumes the experts' time and energy, and errors occasionally occur; when an expert traces particle outlines in labeling software, an image containing more than 100 particles makes sustained, accurate point-by-point tracing problematic, since the whole process is limited by human stamina.
Disclosure of Invention
Aiming at the above defects in the prior art, the automatic labeling method for rock slice microscopic images provided by the invention solves the problems described above.
In order to achieve the purpose of the invention, the invention adopts the following technical scheme: an automatic labeling method for a rock slice microscopic image, comprising the following steps:
s1, acquiring an orthogonal polarization image and a single polarization image of the rock slice;
s2, carrying out image fusion processing on the orthogonal polarization image and the single polarization image to obtain a new fused image;
s3, performing superpixel segmentation on the fused image to obtain a binary image with particles and gap filler;
s4, performing edge detection on each area block in the binary image, and determining a corresponding edge coordinate point;
s5, training a U-net image semantic segmentation network based on different types of binary images with edge coordinate points;
and S6, taking the binary image to be labeled, containing particles and gap filler, as the input of the U-net image semantic segmentation network, obtaining an output carrying particle-type labels for the rock slice image, and thereby realizing automatic labeling of the rock slice microscopic image.
Further, the step S2 is specifically:
s21, performing illumination equalization processing on the single-polarized image with uneven illumination;
s22, performing smooth denoising processing on the single-polarized-light image subjected to illumination equalization processing;
s23, taking the single polarization image subjected to the smoothing and denoising processing as a reference image, and carrying out image alignment processing on the reference image and the orthogonal polarization image;
and S24, carrying out image fusion on the images subjected to the image alignment processing to obtain new fused images.
Further, the step S3 is specifically:
s31, extracting a gray level co-occurrence matrix of the fused new image by adopting a GLCM algorithm, and calculating a self-adaptive K value;
s32, performing edge detection segmentation on a new image fused by a self-adaptive K-value superpixel segmentation SLCM algorithm;
s33, performing region fusion on each region of the segmented image based on color and distance, combining the segmentation rules of polycrystalline bodies and rock debris, and combining quartz and feldspar particles of the polycrystalline bodies and rock debris particles to obtain a final image segmentation result;
and S34, forming a binary image with particles and gap filler based on the image segmentation result.
Further, the gray level co-occurrence matrix in step S31 is P (i, j, d, θ), where d is a spatial distance, θ is a direction angle, i is a row where the start point is located, and j is a column where the start point is located;
the adaptive K value in step S31 is:
[The adaptive K value is given by a formula, reproduced only as an image in the original, that combines the image dimensions X and Y with the GLCM texture statistics Energy, Contrast, Asm, and Correlation.]
in the formula, X and Y are the length and width of the image, and Energy (entropy), Contrast, Asm (angular second moment), and Correlation are texture statistics derived from the gray level co-occurrence matrix.
Further, the step S32 is specifically:
a1, uniformly distributing seed points in the fused new image according to the set number K of the super pixels;
a2, reselecting the seed points in the n x n area of each currently determined seed point;
a3, searching pixel points in the neighborhood of the reselected seed point, and distributing class labels to the pixel points;
and A4, repeating the steps A2-A3 until the searched clustering center of the pixel points does not change any more, and realizing the edge detection segmentation of the image based on the distribution class labels.
Further, the step S4 is specifically:
s41, carrying out edge detection on each area block in the binary image to obtain edge coordinates of each particle;
and S42, screening the edge coordinates of each particle based on a gradient screening method, and deleting redundant coordinates to obtain the final edge coordinate point of each particle.
Further, the step S5 is specifically:
s51, connecting the coordinate points corresponding to the area blocks to form a closed independent area, and adding labels and attributes to the area blocks;
s52, filling different colors in different types of independent areas based on the labels and the attributes of the independent areas;
and S53, training the U-net image semantic segmentation network based on the binary images with the color filling completed.
The invention has the beneficial effects that:
(1) the invention can automatically label complex rock slice images, classify and label the different particle types within a single image, dot and trace the outline of each particle, and assign each particle its corresponding label name;
(2) the invention automates the dotting and outline-drawing of rock slice images, greatly reducing the time experts spend on labeling; it can effectively assist experts in labeling complex rock slice images in batches, achieves high accuracy, and greatly improves working efficiency.
Drawings
FIG. 1 is a flow chart of an automatic labeling method for a rock slice microscopic image provided by the invention.
Detailed Description
The following description of the embodiments is provided to help those skilled in the art understand the invention, but the invention is not limited to the scope of these embodiments. To those skilled in the art, various changes within the spirit and scope of the invention as defined by the appended claims are apparent, and everything produced using the inventive concept is protected.
As shown in fig. 1, an automatic labeling method for a rock slice microscopic image includes the following steps:
s1, acquiring an orthogonal polarization image and a single polarization image of the rock slice;
s2, carrying out image fusion processing on the orthogonal polarization image and the single polarization image to obtain a new fused image;
s3, performing superpixel segmentation on the fused image to obtain a binary image with particles and gap filler;
s4, performing edge detection on each area block in the binary image, and determining a corresponding edge coordinate point;
s5, training a U-net image semantic segmentation network based on different types of binary images with edge coordinate points;
and S6, taking the binary image to be labeled, containing particles and gap filler, as the input of the U-net image semantic segmentation network, obtaining an output carrying particle-type labels for the rock slice image, and thereby realizing automatic labeling of the rock slice microscopic image.
The step S2 is specifically:
s21, performing illumination equalization processing on the single-polarized image with uneven illumination;
wherein the illumination equalization is performed to enhance the overall contrast of the image while preserving its detail;
s22, performing smooth denoising processing on the single-polarized-light image subjected to illumination equalization processing to achieve the purpose of edge-preserving denoising;
s23, taking the single polarization image subjected to the smoothing and denoising processing as a reference image, and carrying out image alignment processing on the reference image and the orthogonal polarization image;
specifically, the processed single-polarization image is used as the reference image; through three steps of feature extraction (ORB), feature matching (bidirectional cross matching), and image transformation (RANSAC), points at the same spatial position are placed in one-to-one correspondence across the multiple orthogonal polarization images of the same rock slice taken at different angles, so that the misaligned orthogonal polarization images are automatically aligned to the single-polarization reference image;
and S24, carrying out image fusion on the images subjected to the image alignment processing to obtain new fused images.
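The S21-S24 pipeline can be sketched in code. The sketch below is illustrative only: it assumes grayscale numpy arrays and pre-aligned inputs, equalizes illumination by dividing out a local-mean background estimate (the patent does not name a specific equalization algorithm), omits the ORB/RANSAC alignment step, and fuses by a simple pixel-wise weighted average; the function names `equalize_illumination` and `fuse` are our own.

```python
import numpy as np

def equalize_illumination(img, kernel=31):
    """Flatten uneven illumination by dividing out a local-mean
    background estimate (box filter via an integral image)."""
    pad = kernel // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    # integral image for a fast box filter
    ii = np.cumsum(np.cumsum(padded, axis=0), axis=1)
    ii = np.pad(ii, ((1, 0), (1, 0)))
    h, w = img.shape
    background = (ii[kernel:kernel + h, kernel:kernel + w]
                  - ii[:h, kernel:kernel + w]
                  - ii[kernel:kernel + h, :w]
                  + ii[:h, :w]) / kernel**2
    flat = img / np.maximum(background, 1e-6)
    return flat * img.mean()          # restore overall brightness

def fuse(single_pol, cross_pol, w=0.5):
    """Pixel-wise weighted average of the (pre-aligned) single- and
    cross-polarized images."""
    return w * single_pol + (1 - w) * cross_pol
```

In practice the alignment step would typically be performed with a feature-based library routine before calling `fuse`.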
The step S3 is specifically:
s31, extracting a gray level co-occurrence matrix of the fused new image by adopting a GLCM algorithm, and calculating a self-adaptive K value;
the gray level co-occurrence matrix is P (i, j, d, theta), wherein d is a spatial distance, theta is a direction angle, i is a row where the starting point is located, and j is a column where the starting point is located;
the adaptive K value in step S31 is:
[The adaptive K value is given by a formula, reproduced only as an image in the original, that combines the image dimensions X and Y with the GLCM texture statistics Energy, Contrast, Asm, and Correlation.]
in the formula, X and Y are the length and width of the image, and Energy (entropy), Contrast, Asm (angular second moment), and Correlation are texture statistics derived from the gray level co-occurrence matrix.
wherein Contrast = Σ_{i,j} (i - j)² P(i,j), Asm = Σ_{i,j} P(i,j)², and the Correlation statistic is computed from P(i,j) in the usual co-occurrence form (its formula appears only as an image in the original).
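The GLCM statistics above can be computed as follows. This is a generic sketch using the standard textbook definitions, since the patent's actual adaptive-K formula is reproduced only as an image; the function names `glcm` and `glcm_stats` and the quantization to 8 gray levels are our own assumptions.

```python
import numpy as np

def glcm(img, d=1, theta=0, levels=8):
    """Gray-level co-occurrence matrix P(i, j) for offset distance d
    and direction theta (0 = horizontal, 90 = vertical), normalized
    to sum to 1."""
    dy, dx = {0: (0, d), 90: (d, 0)}[theta]
    q = (img.astype(float) / (img.max() + 1e-9) * (levels - 1)).astype(int)
    P = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            P[q[y, x], q[y + dy, x + dx]] += 1
    return P / P.sum()

def glcm_stats(P):
    """Contrast, angular second moment, entropy, and correlation of a
    normalized co-occurrence matrix."""
    i, j = np.indices(P.shape)
    contrast = np.sum((i - j) ** 2 * P)
    asm = np.sum(P ** 2)                       # angular second moment
    entropy = -np.sum(P[P > 0] * np.log(P[P > 0]))
    mu_i, mu_j = np.sum(i * P), np.sum(j * P)
    s_i = np.sqrt(np.sum((i - mu_i) ** 2 * P))
    s_j = np.sqrt(np.sum((j - mu_j) ** 2 * P))
    corr = np.sum((i - mu_i) * (j - mu_j) * P) / max(s_i * s_j, 1e-9)
    return contrast, asm, entropy, corr
```

A hypothetical adaptive K would then be some function of these statistics and the image dimensions, as the description indicates.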
S32, performing edge detection segmentation on the fused new image by the SLIC superpixel segmentation algorithm with the adaptive K value;
s33, performing region fusion on each region of the segmented image based on color and distance, combining the segmentation rules of polycrystalline bodies and rock debris, and combining quartz and feldspar particles of the polycrystalline bodies and rock debris particles to obtain a final image segmentation result;
specifically, during region fusion, the distance from each searched pixel to its seed point (cluster center) is computed from both color and spatial distance; quartz and feldspar particles belonging to polycrystalline bodies and rock debris particles are then merged according to the segmentation rules for polycrystalline bodies and rock debris;
and S34, forming a binary image with grains and fillers based on the image segmentation result.
In the generated binary image, white is a particle and black is a gap filler.
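Counting the particles in such a binary image reduces to connected-component labelling; the minimal 4-connected sketch below is our own illustration, not part of the patented method.

```python
import numpy as np
from collections import deque

def label_particles(binary):
    """4-connected component labelling of a binary image in which
    white (True) pixels are particles and black pixels are gap
    filler; returns a label map and the particle count."""
    h, w = binary.shape
    labels = np.zeros((h, w), int)
    current = 0
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] and labels[sy, sx] == 0:
                current += 1                      # new particle found
                q = deque([(sy, sx)])
                labels[sy, sx] = current
                while q:                          # flood fill the blob
                    y, x = q.popleft()
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w and \
                           binary[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = current
                            q.append((ny, nx))
    return labels, current
```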
The step S32 is specifically:
a1, uniformly distributing seed points in the fused new image according to the set number K of the super pixels;
assuming the image has N pixels in total and is pre-divided into K superpixels of equal size, each superpixel covers N/K pixels and the seed spacing (step size) is S = sqrt(N/K);
a2, reselecting the seed points in the n x n area of each currently determined seed point;
the specific method comprises the following steps: calculating gradient values of all pixel points in the neighborhood, and moving the seed point to the place with the minimum gradient in the neighborhood;
a3, searching pixel points in the neighborhood of the reselected seed point, and distributing class labels to the pixel points;
specifically, the search range of the SLIC algorithm is limited to 2S × 2S, which can accelerate algorithm convergence;
and A4, repeating the steps A2-A3 until the searched clustering center of the pixel points does not change any more, and realizing the edge detection segmentation of the image based on the distribution class labels.
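Steps A1-A4 can be sketched as a minimal SLIC-style loop on a grayscale image. This is a simplified illustration: the gradient-based seed adjustment of step A2 is omitted, the compactness weight `m` and the distance formula follow the standard SLIC convention rather than anything stated in the patent, and the stopping rule is a fixed iteration count instead of testing cluster-center convergence.

```python
import numpy as np

def slic_gray(img, K=4, m=10.0, iters=5):
    """Minimal SLIC-style superpixels on a grayscale image: grid
    seeds, assignment within a 2S x 2S window per seed using a mixed
    intensity/spatial distance, then cluster-center updates."""
    h, w = img.shape
    S = max(1, int(np.sqrt(h * w / K)))        # seed spacing
    ys = np.arange(S // 2, h, S)
    xs = np.arange(S // 2, w, S)
    centers = np.array([[y, x, img[y, x]] for y in ys for x in xs], float)
    labels = np.zeros((h, w), int)
    for _ in range(iters):
        dist = np.full((h, w), np.inf)
        for k, (cy, cx, cv) in enumerate(centers):
            # restrict the search to a 2S x 2S window around the seed
            y0, y1 = max(0, int(cy) - S), min(h, int(cy) + S + 1)
            x0, x1 = max(0, int(cx) - S), min(w, int(cx) + S + 1)
            yy, xx = np.mgrid[y0:y1, x0:x1]
            dc = np.abs(img[y0:y1, x0:x1] - cv)            # color term
            ds = np.sqrt((yy - cy) ** 2 + (xx - cx) ** 2)  # spatial term
            D = np.sqrt(dc ** 2 + (ds / S) ** 2 * m ** 2)
            better = D < dist[y0:y1, x0:x1]
            dist[y0:y1, x0:x1][better] = D[better]
            labels[y0:y1, x0:x1][better] = k
        for k in range(len(centers)):          # move centers to cluster means
            mask = labels == k
            if mask.any():
                yy, xx = np.nonzero(mask)
                centers[k] = [yy.mean(), xx.mean(), img[mask].mean()]
    return labels
```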
The step S4 is specifically:
s41, carrying out edge detection on each area block in the binary image to obtain edge coordinates of each particle;
and S42, screening the edge coordinates of each particle based on a gradient screening method, and deleting redundant coordinates to obtain the final edge coordinate point of each particle.
For example, when screening the boundary points of an irregular polygon: if a boundary segment is a straight line, only the coordinate points at its two ends need to be retained; where the boundary is irregular, the points with extremely large gradient differences are retained.
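The endpoint-keeping rule described above can be sketched with a cross-product collinearity test. This stands in for the patent's gradient screening method, whose exact criterion is not given; the function name `screen_edge_points` and the tolerance are our own.

```python
import numpy as np

def screen_edge_points(points, angle_tol=1e-6):
    """Thin an ordered list of boundary points: drop a point when it
    is (nearly) collinear with its neighbours, so straight segments
    keep only their two endpoints while corners and irregular
    stretches are retained. The turn at each point is measured by the
    cross product of the incoming and outgoing direction vectors."""
    pts = np.asarray(points, float)
    keep = [pts[0]]
    for k in range(1, len(pts) - 1):
        v1 = pts[k] - pts[k - 1]
        v2 = pts[k + 1] - pts[k]
        cross = v1[0] * v2[1] - v1[1] * v2[0]
        if abs(cross) > angle_tol:     # direction changes -> a corner
            keep.append(pts[k])
    keep.append(pts[-1])
    return np.array(keep)
```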
The step S5 is specifically:
s51, connecting the coordinate points corresponding to the area blocks to form a closed independent area, and adding labels and attributes to the area blocks;
s52, filling different colors in different types of independent areas based on the labels and the attributes of the independent areas;
and S53, training the U-net image semantic segmentation network based on the binary images with the color filling completed.
Specifically, in the training process of the U-net image semantic segmentation network, an expert can synchronously carry out human-computer interactive modification on the automatically labeled image so as to improve the accuracy of network labeling.
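Step S52's colour filling can be sketched as a palette lookup over the label map. The class set and colours below are hypothetical, since the patent does not enumerate them.

```python
import numpy as np

# Hypothetical class palette: label id -> RGB colour. The real class
# set (quartz, feldspar, rock debris, gap filler, ...) and colours
# are not specified in the patent text.
PALETTE = {0: (0, 0, 0), 1: (255, 0, 0), 2: (0, 255, 0), 3: (0, 0, 255)}

def colorize_labels(label_map, palette=PALETTE):
    """Turn an integer label map (one id per region type, per
    S51/S52) into an RGB mask with one colour per class, ready to
    pair with the input image when training a segmentation network
    such as U-net."""
    rgb = np.zeros(label_map.shape + (3,), dtype=np.uint8)
    for cls, colour in palette.items():
        rgb[label_map == cls] = colour
    return rgb
```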

Claims (7)

1. An automatic labeling method for a rock slice microscopic image is characterized by comprising the following steps:
s1, acquiring an orthogonal polarization image and a single polarization image of the rock slice;
s2, carrying out image fusion processing on the orthogonal polarization image and the single polarization image to obtain a new fused image;
s3, performing superpixel segmentation on the fused image to obtain a binary image with particles and gap filler;
s4, performing edge detection on each area block in the binary image, and determining a corresponding edge coordinate point;
s5, training a U-net image semantic segmentation network based on different types of binary images with edge coordinate points;
and S6, taking the binary image to be labeled, containing particles and gap filler, as the input of the U-net image semantic segmentation network, obtaining an output carrying particle-type labels for the rock slice image, and thereby realizing automatic labeling of the rock slice microscopic image.
2. The automatic labeling method for the rock slice microscopic image according to claim 1, wherein the step S2 specifically comprises:
s21, performing illumination equalization processing on the single-polarized image with uneven illumination;
s22, performing smooth denoising processing on the single-polarized-light image subjected to illumination equalization processing;
s23, taking the single polarization image subjected to the smoothing and denoising processing as a reference image, and carrying out image alignment processing on the reference image and the orthogonal polarization image;
and S24, carrying out image fusion on the images subjected to the image alignment processing to obtain new fused images.
3. The automatic labeling method for the rock slice microscopic image according to claim 1, wherein the step S3 specifically comprises:
s31, extracting a gray level co-occurrence matrix of the fused new image by adopting a GLCM algorithm, and calculating a self-adaptive K value;
s32, performing edge detection segmentation on a new image fused by a self-adaptive K-value superpixel segmentation SLCM algorithm;
s33, performing region fusion on each region of the segmented image based on color and distance, combining the segmentation rules of polycrystalline bodies and rock debris, and combining quartz and feldspar particles of the polycrystalline bodies and rock debris particles to obtain a final image segmentation result;
and S34, forming a binary image with particles and gap filler based on the image segmentation result.
4. The automatic labeling method for a rock slice microscopic image according to claim 3, wherein the gray level co-occurrence matrix in step S31 is P (i, j, d, θ), where d is a spatial distance, θ is a direction angle, i is a row where a starting point is located, and j is a column where the starting point is located;
the adaptive K value in step S31 is:
[The adaptive K value is given by a formula, reproduced only as an image in the original, that combines the image dimensions X and Y with the GLCM texture statistics Energy, Contrast, Asm, and Correlation.]
in the formula, X and Y are the length and width of the image, and Energy (entropy), Contrast, Asm (angular second moment), and Correlation are texture statistics derived from the gray level co-occurrence matrix.
5. The automatic labeling method for the rock slice microscopic image according to claim 3, wherein the step S32 is specifically as follows:
a1, uniformly distributing seed points in the fused new image according to the set number K of the super pixels;
a2, reselecting the seed points in the n x n area of each currently determined seed point;
a3, searching pixel points in the neighborhood of the reselected seed point, and distributing class labels to the pixel points;
and A4, repeating the steps A2-A3 until the searched clustering center of the pixel points does not change any more, and realizing the edge detection segmentation of the image based on the distribution class labels.
6. The automatic labeling method for the rock slice microscopic image according to claim 1, wherein the step S4 specifically comprises:
s41, carrying out edge detection on each area block in the binary image to obtain edge coordinates of each particle;
and S42, screening the edge coordinates of each particle based on a gradient screening method, and deleting redundant coordinates to obtain the final edge coordinate point of each particle.
7. The automatic labeling method for the rock slice microscopic image according to claim 1, wherein the step S5 specifically comprises:
s51, connecting the coordinate points corresponding to the area blocks to form a closed independent area, and adding labels and attributes to the area blocks;
s52, filling different colors in different types of independent areas based on the labels and the attributes of the independent areas;
and S53, training the U-net image semantic segmentation network based on the binary images with the color filling completed.
CN202110063235.5A 2021-01-18 2021-01-18 Automatic labeling method for rock slice microscopic image Active CN112784894B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110063235.5A CN112784894B (en) 2021-01-18 2021-01-18 Automatic labeling method for rock slice microscopic image


Publications (2)

Publication Number Publication Date
CN112784894A (en) 2021-05-11
CN112784894B CN112784894B (en) 2022-11-15

Family

ID=75757166



Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107146233A (en) * 2017-04-24 2017-09-08 Sichuan University Granulometry segmentation based on petrographic thin section polarization sequence images
US10096122B1 * 2017-03-28 2018-10-09 Amazon Technologies, Inc. Segmentation of object image data from background image data
CN108921853A (en) * 2018-06-22 2018-11-30 Xidian University Image segmentation method based on superpixels and immune sparse spectral clustering
CN109087318A (en) * 2018-07-26 2018-12-25 Northeastern University MRI brain tumor image segmentation method based on an optimized U-net network model
CN109523566A (en) * 2018-09-18 2019-03-26 Jiang Feng Automatic segmentation method for sandstone slice microscopic images
CN109934838A (en) * 2019-02-28 2019-06-25 Hubei Ecarx Technology Co., Ltd. Superpixel-based image semantic segmentation annotation method and device
CN110119728A (en) * 2019-05-23 2019-08-13 Harbin Institute of Technology Remote sensing image cloud detection method based on a multi-scale fusion semantic segmentation network
US20200082540A1 * 2018-09-07 2020-03-12 Volvo Car Corporation Methods and systems for providing fast semantic proposals for image and video annotation


Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
OLAF RONNEBERGER et al.: "U-Net: Convolutional Networks for Biomedical Image Segmentation", Computer Vision and Pattern Recognition *
TAO XIE et al.: "PSDSD-A Superpixel Generating Method Based on Pixel Saliency Difference and Spatial Distance for SAR Images", Sensors *
CUI Zhipeng: "Research on deep-learning-based image segmentation and image semantic segmentation and its implementation", China Master's Theses Full-text Database, Information Science and Technology *
LI Guiqing et al.: "A road image segmentation algorithm with an adaptively generated number of superpixels", Science Technology and Engineering *
SHEN Xiaoyang: "Research and implementation of deep-learning-based image semantic segmentation technology", China Master's Theses Full-text Database, Information Science and Technology *
ZHONG Yi et al.: "Particle segmentation method for fused orthogonal polarization images of rock thin sections", Information Technology and Network Security *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113688956A (en) * 2021-10-26 2021-11-23 Southwest Petroleum University Sandstone slice segmentation and identification method based on a depth feature fusion network
CN114897917A (en) * 2022-07-13 2022-08-12 Southwest Petroleum University Multi-level rock casting body slice image segmentation method
CN114897917B (en) * 2022-07-13 2022-10-28 Southwest Petroleum University Multi-level rock casting body slice image segmentation method



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant