CN112927238B - Core sequence image annotation method combining optical flow and watershed segmentation - Google Patents


Info

Publication number: CN112927238B (application CN201911240627.3A)
Authority: CN (China)
Prior art keywords: image, mark, points, labeling, frame
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh); other version: CN112927238A
Inventors: 滕奇志, 王润涵, 何小海, 卿粼波, 王正勇, 吴晓红
Original and current assignee: Sichuan University (the listed assignees may be inaccurate)
Application CN201911240627.3A filed by Sichuan University; published as CN112927238A, granted as CN112927238B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G06T 7/13 Edge detection
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20016 Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • G06T 2207/20112 Image segmentation details
    • G06T 2207/20152 Watershed segmentation


Abstract

The invention discloses a core sequence image annotation method combining optical flow and watershed segmentation, relating mainly to core sequence image annotation technology. The method comprises the following steps: (1) registering the core sequence images; (2) setting marker points on the first frame and performing marker-based watershed segmentation on the current image; (3) if the segmented image meets the labeling requirement, continuing to step (4); otherwise, adjusting a small number of marker points and segmenting again; (4) tracking the accepted marker points of the current frame as feature points of the improved pyramid-based LK optical flow method to obtain the marker points of the next frame; (5) repeating steps (2) to (4) to finally obtain the label of each image in the sequence. By exploiting the characteristics of core sequence images and their inter-layer correlation, the method markedly improves labeling efficiency and labeling quality.

Description

Core sequence image annotation method combining optical flow and watershed segmentation
Technical Field
The invention relates to image processing technology for core sequence images, and in particular to a core sequence image labeling method combining optical flow and watershed segmentation.
Background
In petroleum geology research, to analyze rock pore and grain structure, a rock sample is scanned by CT or FIB-SEM to obtain a two-dimensional image sequence, and pore segmentation is performed on the sequence to obtain the three-dimensional pore structure of the core. Among segmentation algorithms, deep learning methods are increasingly applied in this field. With the continuing application and development of deep learning in various fields, deep learning algorithms urgently need massive labeled data, and for most artificial intelligence projects the manual labeling of massive data is a very heavy task. This is especially true for core sequence images, which contain many components such as rock grains, matrix, and clay minerals, making it time-consuming to delineate the pore areas. Moreover, a CT or FIB-SEM scan produces thousands of slices of large size, further increasing the burden of manual labeling.
Existing approaches to semantic-segmentation image annotation generally take one of the following forms. (1) Connecting and filling labeled contours, converting contour annotation into pixel-level annotation: the target region is drawn manually with labeling software such as Labelme or Photoshop and then filled. (2) Crowdsourced data-annotation platforms, which improve annotation results by aggregating the labels of many volunteers, weighting by task difficulty and annotator ability, and setting task and reward mechanisms. (3) Hiring professional data-annotation companies, where many annotators perform the semantic-segmentation labeling. For (1), although such labeling software is convenient to a degree, it ignores the inter-layer correlation of the sequence and can only label single images one by one; for core sequence images that are both large and numerous the workload is excessive, and for core images with irregular, unsmooth target edges, accurately tracing the edges with such software is difficult and time-consuming. For (2), annotators differ in skill and judgment, so the accuracy of the labeled images is hard to guarantee. For (3), accuracy is somewhat guaranteed, but the cost is high.
Therefore, reducing the labeling difficulty, reducing the manual workload, and improving the labeling quality are urgent problems for the application and development of deep learning in various fields. For core sequence images, which possess inter-layer correlation, applying that correlation to the labeling problem is a feasible technical approach.
Disclosure of Invention
The invention aims to solve the above problems by providing a core sequence image annotation method combining optical flow and watershed segmentation, which exploits the inter-layer correlation of the sequence to reduce the workload and difficulty of annotation and to improve annotation quality.
The invention realizes the purpose through the following technical scheme:
(1) unifying the size of the images in the core sequence and performing translation registration so that the structural similarity between the current image and the previous frame is maximized;
(2) setting a number of marker points of two classes on the first frame of the sequence obtained in step (1), marking the target area and the background area;
(3) performing marker-based watershed segmentation of the current image according to the set marker points to obtain a segmented image;
(4) judging whether the segmented image meets the labeling requirement; if so, continuing to step (5); otherwise, adjusting the marker points (adding, deleting, etc.) and executing again from step (3);
(5) judging whether the current frame is the last frame of the sequence; if so, the labeling of the sequence is finished; otherwise, tracking the accepted marker points of the current frame as feature points of the improved pyramid-based LK optical flow method to obtain the marker points of the next frame;
(6) repeating steps (3) to (5) with the marker points obtained in step (5) to obtain the labeled image of each frame.
The basic principle of the method is as follows:
The core sequence images have inter-layer correlation, but some images in the sequence may be slightly displaced by the CT, FIB-SEM, or other equipment during imaging. Because such shifts strongly affect the subsequent optical-flow tracking of feature points, the sequence images must first be registered, i.e., the shifts between them corrected. By the inter-layer correlation, a corrected image should be structurally similar to the previous frame, so SSIM (structural similarity) is used to find the best correcting displacement; at the same time a 400 × 400 region is cropped to unify the image size.

The pore areas are then marked. Because the pore edges are irregular and unsmooth, existing manual labeling software can hardly trace them accurately. Watershed segmentation is an image segmentation algorithm combining the topographic analogy with region-growing ideas, and it can automatically and accurately extract edges where the gray value changes markedly. Marker-based watershed first marks the image so that the target regions of interest are identified; each marker labels a meaningful region. The markers are used to forcibly modify the minima of the original gradient image, suppressing irrelevant minima, and the watershed algorithm is finally applied to the modified gradient image. With properly placed marker points, pore areas with complex edges can be extracted accurately.

Because the core sequence images are correlated between layers, the gray-level distributions of two adjacent frames differ little. The marker points are therefore tracked with the improved pyramid-based LK optical flow method to obtain their positions in the next frame. Hence, when marker-based watershed segmentation is performed on the next frame, the marker points need not be selected anew; only a small number of unsuitable markers must be modified, which greatly reduces the workload and improves manual labeling efficiency.
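The marker-based watershed step described above can be sketched as a priority flood over the gradient image, starting from the user-set marker pixels and always growing from the lowest-gradient pixel next. The pure-Python sketch below illustrates the idea only; it is not the patent's implementation (in practice a library routine such as OpenCV's marker-based watershed would be used):

```python
import heapq
import numpy as np

def marker_watershed(gradient, markers):
    """Flood the gradient image outward from the labeled marker pixels.

    gradient: 2-D float array (e.g. the gradient magnitude of a core image).
    markers:  2-D int array; 0 = unlabeled, 1 = target (pore), 2 = background.
    Returns a label image in which every pixel carries the class of the
    marker basin that reached it first, growing lowest-gradient pixels first.
    """
    labels = markers.copy()
    h, w = gradient.shape
    heap = []
    for y, x in zip(*np.nonzero(markers)):
        heapq.heappush(heap, (gradient[y, x], int(y), int(x)))
    while heap:
        _, y, x = heapq.heappop(heap)
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] == 0:
                labels[ny, nx] = labels[y, x]  # inherit the neighbor's class
                heapq.heappush(heap, (gradient[ny, nx], ny, nx))
    return labels
```

Here `gradient` would be the gradient magnitude of a core image and `markers` the rasterized target/background marker points; high-gradient ridges (pore edges) are flooded last, so they become the boundaries between the two classes.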
Specifically, in step (1):

For the current frame and the previous frame, a template with a fixed size of 400 × 400 is first set. Let I be the part of the previous frame image at the position of the template window, and let I'_{m,n} be the image in the template after the current frame is moved m pixels along the x direction and n pixels along the y direction. The structural similarity of the two is

\mathrm{SSIM}(I, I'_{m,n}) = \frac{(2\mu_I \mu_{I'_{m,n}} + c_1)(2\sigma_{I I'_{m,n}} + c_2)}{(\mu_I^2 + \mu_{I'_{m,n}}^2 + c_1)(\sigma_I^2 + \sigma_{I'_{m,n}}^2 + c_2)}

where \mu_I and \mu_{I'_{m,n}} are the gray-level means of I and I'_{m,n}, \sigma_I and \sigma_{I'_{m,n}} are their standard deviations, \sigma_{I I'_{m,n}} is their covariance, and c_1 = (k_1 L)^2 and c_2 = (k_2 L)^2, with defaults k_1 = 0.01 and k_2 = 0.03; L is the pixel value range, 255 for grayscale images.

Then m and n are traversed within bounds determined by the height h and width w of the current frame, and the shift that maximizes SSIM(I, I'_{m,n}) is selected; the corresponding I'_{m,n} is the registered image.
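Under the definitions above, the registration search can be sketched as an exhaustive scan over integer shifts (m, n), scoring each shifted window with the single-window SSIM formula. The `radius` bound below is an illustrative assumption; the patent derives the m, n range from the frame height h and width w:

```python
import numpy as np

def ssim(a, b, L=255.0, k1=0.01, k2=0.03):
    """Single-window SSIM of two equal-size grayscale images, per the
    formula above; L is the pixel value range (255 for 8-bit images)."""
    a = a.astype(np.float64)
    b = b.astype(np.float64)
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    num = (2 * mu_a * mu_b + c1) * (2 * cov + c2)
    den = (mu_a ** 2 + mu_b ** 2 + c1) * (a.var() + b.var() + c2)
    return num / den

def register_translation(prev, cur, origin, size, radius):
    """Return (best_ssim, m, n): the integer shift of the current frame that
    maximizes SSIM against the previous frame's template window at `origin`.
    `radius` bounds the search and is an assumption for illustration."""
    y0, x0 = origin
    ref = prev[y0:y0 + size, x0:x0 + size]
    best = (-2.0, 0, 0)
    for m in range(-radius, radius + 1):      # shift along x
        for n in range(-radius, radius + 1):  # shift along y
            win = cur[y0 + n:y0 + n + size, x0 + m:x0 + m + size]
            if win.shape != ref.shape:
                continue  # shifted window falls outside the frame
            s = ssim(ref, win)
            if s > best[0]:
                best = (s, m, n)
    return best
```

For a 400 × 400 template as in the patent, `size` would be 400; the exhaustive scan is O(radius²) SSIM evaluations, which is acceptable for the small shifts introduced by the imaging equipment.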
in the step (5):
for the selected mark points, the mark points are divided into two types which respectively represent a target area and a background area; pixel points in a neighborhood range with the radius r of the mark point are all set as the same kind of mark points, and the radius r can be manually adjusted;
when the selected mark points are tracked by using an improved LK optical flow method based on pyramid layering, setting the pyramid level to be 3 and the window size to be 15 multiplied by 15; if the distance between the mark point obtained by tracking and the mark point corresponding to the previous frame is greater than the distance between 15 pixel points, discarding the mark point;
for the adjustment mark point, if the mark point causes over-segmentation, deleting the mark point of the area; if the mark points are lacked to cause under-segmentation, adding the mark points of the area; and adding the adjusted mark points to the feature point set again for tracking.
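The tracking step can be illustrated with a single-level Lucas-Kanade update plus the 15-pixel rejection rule described above. A real implementation would use a 3-level pyramid with a 15 × 15 window (e.g. OpenCV's `calcOpticalFlowPyrLK`); the single-level, fixed-window numpy sketch below is a simplification for illustration:

```python
import numpy as np

def lk_step(I, J, pt, win=7):
    """One Lucas-Kanade update for a single point at one pyramid level.
    I, J: consecutive grayscale frames (float arrays); pt: (x, y)."""
    x, y = int(round(pt[0])), int(round(pt[1]))
    r = win // 2
    Iw = I[y - r:y + r + 1, x - r:x + r + 1]
    Jw = J[y - r:y + r + 1, x - r:x + r + 1]
    Iy, Ix = np.gradient(Iw)          # spatial gradients of the window
    It = Jw - Iw                      # temporal difference
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    dx, dy = np.linalg.solve(A, b)    # least-squares flow for the window
    return (pt[0] + dx, pt[1] + dy)

def track_markers(I, J, pts, max_disp=15.0):
    """Track marker points from frame I to frame J, discarding any point
    that moves more than max_disp pixels (the rejection rule above)."""
    kept = []
    for p in pts:
        q = lk_step(I, J, p)
        if np.hypot(q[0] - p[0], q[1] - p[1]) <= max_disp:
            kept.append(q)
    return kept
```

The pyramid variant would downsample both frames three times, run this update at the coarsest level, and propagate the scaled displacement down as the starting guess at each finer level, which is what lets the method handle displacements larger than the window.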
Compared with the prior art, the invention has the beneficial effects that:
1. In the core sequence image labeling method combining optical flow and watershed segmentation provided by the invention, marker-based watershed segmentation replaces manual tracing for extracting the pore edges in the core, so the obtained target regions are more accurate and closer to the real situation. Meanwhile, compared with manually tracing the target edges, placing a number of marker points is simpler, takes less time, and is more efficient;
2. In the proposed method, the optical flow method tracks the watershed marker points of the current frame to obtain the corresponding marker positions in the next frame, and watershed segmentation is performed after a small number of unsuitable markers are manually adjusted. Marker points need not be re-selected for the next frame, which greatly saves manual labeling time and improves labeling efficiency.
In short, the method of the invention readily delivers good labeling quality and high efficiency in the deep learning field, which urgently needs large amounts of labeled data.
Drawings
FIG. 1 is an exemplary FIB-SEM sequence chart of the tight carbonate rock of the present example.
Figs. 2-1 and 2-2 are, respectively, an exemplary original FIB-SEM image of the tight carbonate rock of this embodiment and the binary image after the pore area and background area are labeled.
Fig. 3 is a flowchart of a core sequence image annotation method combining optical flow and watershed segmentation according to this embodiment.
Fig. 4 is a flowchart of the sequence image registration performed in the present embodiment.
Figs. 5-1 and 5-2 are, respectively, the XZ observation plane of the unregistered sequence images of this embodiment and an enlarged view of that plane; Figs. 5-3 and 5-4 are, respectively, the XZ observation plane of the registered sequence images and an enlarged view of that plane.
Figs. 6-1 and 6-2 are, respectively, the tight carbonate image with markers added and the image obtained after segmentation.
FIG. 7-1 shows the mark points of the current frame obtained by tracking the mark points of FIG. 6-1 using the LK optical flow method based on pyramid layering in this embodiment.
FIG. 7-2 is a labeled image obtained by performing watershed segmentation using the marker points obtained in FIG. 7-1 according to this embodiment.
Fig. 8-1 shows the marked image obtained after the marking points of fig. 7-1 are manually adjusted according to the embodiment.
FIG. 8-2 is a labeled image obtained by performing watershed segmentation on FIG. 8-1 according to this embodiment.
Fig. 9 shows the pyramid-based LK optical flow method tracking the changes of the marker points across 15 adjacent frames in this embodiment.
Fig. 10 is the labeled binary image obtained by applying the method of the present invention to the frames of Fig. 9.
Figs. 11-1 and 11-2 show, respectively, the labeling results obtained with LabelMe and with the method of the present invention in this example.
Detailed Description
The invention will be further illustrated with reference to the following specific examples and the accompanying drawings:
in order to make the method of the present invention more understandable and approximate to real application, the present embodiment uses FIB-SEM sequence images of dense carbonate rock, the original size is 1024 × 1024, and since the area occupied by the pore region is smaller, the area containing the pore portion is 400 × 400 cut. FIG. 1 is an exemplary FIB-SEM of dense carbonate rock used in this example. The white solid line circle region is a pore region, that is, a target region to be labeled in this embodiment. The rest is a background area. Fig. 2-1 and 2-2 are diagrams of an original image of an example of the compact carbonate rock of this embodiment, and binary diagrams after labeling a pore region and a background region, where a pixel with a gray value of 255 represents a target region, and a pixel with a gray value of 0 represents a background region.
Fig. 3 is a flowchart of a core sequence image annotation method combining optical flow and watershed segmentation according to this embodiment.
The implementation steps are as follows:
(1) performing translation registration on each image in the core sequence images to enable the structural similarity value of the current image and the previous frame of image to be maximum; the registration process is shown in fig. 4.
For the current frame and the previous frame, a template with a fixed size of 400 × 400 is first set. Let I be the part of the previous frame image at the position of the template window, and let I'_{m,n} be the image in the template after the current frame is moved m pixels along the x direction and n pixels along the y direction. The structural similarity of the two is

\mathrm{SSIM}(I, I'_{m,n}) = \frac{(2\mu_I \mu_{I'_{m,n}} + c_1)(2\sigma_{I I'_{m,n}} + c_2)}{(\mu_I^2 + \mu_{I'_{m,n}}^2 + c_1)(\sigma_I^2 + \sigma_{I'_{m,n}}^2 + c_2)}

where \mu_I and \mu_{I'_{m,n}} are the gray-level means of I and I'_{m,n}, \sigma_I and \sigma_{I'_{m,n}} are their standard deviations, \sigma_{I I'_{m,n}} is their covariance, and c_1 = (k_1 L)^2 and c_2 = (k_2 L)^2, with defaults k_1 = 0.01 and k_2 = 0.03; L is the pixel value range, 255 for grayscale images.

Then m and n are traversed within bounds determined by the height h and width w of the original image, and the shift that maximizes SSIM(I, I'_{m,n}) is selected; the corresponding I'_{m,n} is the registered image.
(2) The registered images are obtained through step (1), each of uniform size 400 × 400.
Figs. 5-1 and 5-2 show, respectively, the XZ observation plane of the unregistered sequence images of this embodiment and an enlarged view of that plane; Figs. 5-3 and 5-4 show the corresponding plane and enlarged view after registration. It can be observed that the offset problem is effectively corrected by using structural similarity.
(3) Setting a number of marker points of two classes on the first frame of the sequence obtained in step (2), marking the target area and the background area;
as shown in fig. 6-1, where the green open markers represent the background area and the red solid markers represent the target pore area.
(4) According to the set mark points, carrying out mark-based watershed segmentation on the current image to obtain a segmented image;
(5) Judging whether the segmented image meets the labeling requirement; if so, continuing to step (6); otherwise, adjusting the marker points (adding, deleting, etc.) and executing again from step (4);
as shown in fig. 6-2, the labeled binary image obtained by performing watershed segmentation on the image with the marker obtained in step (3) is observed to satisfy the labeling requirement, and the process proceeds to step (6).
(6) Judging whether the current frame is the last frame of the sequence; if so, the labeling of the sequence is finished; otherwise, tracking the accepted marker points of the current frame as feature points of the improved pyramid-based LK optical flow method to obtain the marker points of the next frame;
The selected marker points are divided into two classes, representing the target area and the background area respectively. All pixels within a neighborhood of radius r around a marker point are set as marker points of the same class; the radius r can be adjusted manually.

When the marker points are tracked with the improved pyramid-based LK optical flow method, the pyramid level is set to 3 and the window size to 15 × 15. If the distance between a tracked marker point and its corresponding point in the previous frame exceeds 15 pixels, the point is discarded.

For marker adjustment: if a marker point causes over-segmentation, the marker points of that area are deleted; if missing marker points cause under-segmentation, marker points are added in that area. The adjusted marker points are then added back to the feature point set for tracking.
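The radius-r rule restated above (all pixels within radius r of a marker point take that marker's class) amounts to rasterizing the marker points into the marker image consumed by the watershed step. The following is a minimal numpy sketch; the function name and the class encoding (1 = target, 2 = background) are illustrative assumptions:

```python
import numpy as np

def rasterize_markers(shape, points, classes, r):
    """Build a marker image from marker points: every pixel within radius r
    of a point receives that point's class, per the radius-r rule above.
    points:  list of (y, x) marker coordinates.
    classes: matching list of class ids, e.g. 1 = target, 2 = background."""
    marker_img = np.zeros(shape, dtype=np.int32)
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    for (y, x), c in zip(points, classes):
        mask = (yy - y) ** 2 + (xx - x) ** 2 <= r * r
        marker_img[mask] = c
    return marker_img
```

Enlarging r makes each click seed a bigger region (fewer clicks, coarser control), which is why the text lets r be adjusted manually.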
As shown in Fig. 7-1, the marker points of the current frame are obtained by tracking the markers of Fig. 6-1 (the previous frame) with the pyramid-based LK optical flow method; the light lines show the corresponding marker trajectories. It can be observed that the tracked markers capture the target and background areas well without manual re-selection.
As shown in Fig. 7-2, the labeled image is obtained by watershed segmentation using the marker points of Fig. 7-1. An over-segmentation can be observed at the upper left of Fig. 7-2: an actual background area is incorrectly labeled as pore area. To correct this, the markers are adjusted manually by adding a background marker at the upper left, as shown in Fig. 8-1, where the white circle identifies the added marker. After this slight adjustment of the marker points, watershed segmentation is performed again, yielding the correctly labeled image of Fig. 8-2.
(7) Repeating steps (4) to (6) with the marker points obtained in step (6) to obtain the labeled image of each frame.
To show the marker tracking more intuitively, Fig. 9 presents the pyramid-based LK optical flow method tracking the changes of the marker points across 15 adjacent frames. Here only the target area is marked, shown as the light pixel regions in the figure. As Fig. 9 shows, the optical flow method tracks the markers well: the large pore in the middle of the image gradually splits into two pores as Z increases, and the markers follow this change under optical flow tracking, falling accurately within the target pore areas. Markers therefore need not be re-added for every image in the sequence; only mistracked markers need correction, which greatly reduces the workload and speeds up labeling. Fig. 10 is the labeled binary image obtained by applying the method of the present invention to Fig. 9.
(8) To further demonstrate the superiority of the technique, the FIB-SEM sequence images of the tight carbonate rock are labeled both with the proposed method and with the LabelMe software, and the two are compared in terms of efficiency and accuracy.
As shown in Figs. 11-1 and 11-2, which are the labeling results obtained with LabelMe and with the method of the present invention respectively, the darker lines in Fig. 11-2 mark the labeled regions. Because LabelMe, by its basic principle, delineates a region with a sequence of line segments, its labeling quality is poor for pore target areas with smooth curved edges.
Regarding efficiency, for the sequence containing Figs. 11-1 and 11-2, 50 consecutive frames were taken as the labeling objects, under the premise of comparable labeling quality. LabelMe took 184.74 s per frame on average, while the proposed method took 25.71 s per frame: efficiency is markedly improved and labeling time greatly reduced.
In conclusion, for labeling core sequence images, the proposed method greatly improves labeling efficiency while achieving a good labeling effect.
The above embodiment is only a preferred embodiment of the present invention and does not limit its technical solutions. Any simple modification, equivalent change, or adaptation made to the above embodiment within the technical spirit of the invention falls within the protection scope of the technical solutions of the present invention.

Claims (2)

1. A core sequence image annotation method combining optical flow and watershed segmentation, characterized by comprising the following steps:
(1) unifying the size of each image in the core sequence, setting a template with a fixed size of 400 × 400, taking the part of the previous frame image at the position of the template window, computing the structural similarity value with the SSIM (Structural Similarity) formula, and performing translation registration so that the structural similarity between the current image and the previous frame is maximized;
(2) setting a number of marker points of two classes on the first frame of the sequence obtained in step (1), marking the target area and the background area;
(3) performing marker-based watershed segmentation of the current image according to the set marker points to obtain a segmented image;
(4) judging whether the segmented image meets the labeling requirement; if so, continuing to step (5); otherwise, adjusting the marker points (adding, deleting, etc.) and executing again from step (3);
(5) judging whether the current frame is the last frame of the sequence; if so, the labeling of the sequence is finished; otherwise, tracking the accepted marker points of the current frame as feature points of the improved pyramid-based LK optical flow method to obtain the marker points of the next frame; the selected marker points are divided into two classes, representing the target area and the background area respectively; all pixels within a neighborhood of radius r around a marker point are set as marker points of the same class, and the radius r can be adjusted manually; when the marker points are tracked with the improved pyramid-based LK optical flow method, the pyramid level is set to 3 and the window size to 15 × 15; if the distance between a tracked marker point and its corresponding point in the previous frame exceeds 15 pixels, the point is discarded; for marker adjustment, if a marker point causes over-segmentation, the marker points of that area are deleted; if missing marker points cause under-segmentation, marker points are added in that area; the adjusted marker points are added back to the feature point set for tracking;
(6) repeating steps (3) to (5) with the marker points obtained in step (5) to obtain the labeled image of each frame.
2. The core sequence image annotation method combining optical flow and watershed segmentation according to claim 1, characterized in that the specific method of translation registration in step (1) is as follows:
For the current frame and the previous frame, a template with a fixed size of 400 × 400 is first set. Let I be the part of the previous frame image at the position of the template window, and let I'_{m,n} be the image in the template after the current frame is moved m pixels along the x direction and n pixels along the y direction. The SSIM formula gives their structural similarity as

\mathrm{SSIM}(I, I'_{m,n}) = \frac{(2\mu_I \mu_{I'_{m,n}} + c_1)(2\sigma_{I I'_{m,n}} + c_2)}{(\mu_I^2 + \mu_{I'_{m,n}}^2 + c_1)(\sigma_I^2 + \sigma_{I'_{m,n}}^2 + c_2)}

where \mu_I and \mu_{I'_{m,n}} are the gray-level means of I and I'_{m,n}, \sigma_I and \sigma_{I'_{m,n}} are their standard deviations, \sigma_{I I'_{m,n}} is their covariance, and c_1 = (k_1 L)^2 and c_2 = (k_2 L)^2, with defaults k_1 = 0.01 and k_2 = 0.03; L is the pixel value range, 255 for grayscale images. Then m and n are traversed within bounds determined by the height h and width w of the current frame, and the shift that maximizes SSIM(I, I'_{m,n}) is selected; the corresponding I'_{m,n} is the registered 400 × 400 region.
Application CN201911240627.3A (filed 2019-12-06, priority date 2019-12-06): Core sequence image annotation method combining optical flow and watershed segmentation. Status: Active; granted as CN112927238B.


Publications: CN112927238A, published 2021-06-08; CN112927238B, granted 2022-07-01.

Family ID: 76161477.

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102609708A (en) * 2012-01-15 2012-07-25 北京工业大学 Method for calculating translation vector and rotary parameters of central point of barbell
CN110264484A (en) * 2019-06-27 2019-09-20 上海海洋大学 A kind of improvement island water front segmenting system and dividing method towards remotely-sensed data

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100413327C (en) * 2006-09-14 2008-08-20 浙江大学 Video object masking method based on spatio-temporal contour features
CN101515366B (en) * 2009-03-30 2010-12-01 西安电子科技大学 Watershed SAR image segmentation method based on complex-wavelet marker extraction
US9094606B2 (en) * 2011-07-04 2015-07-28 Waikatolink Limited Motion compensation in range imaging
JP2013196308A (en) * 2012-03-19 2013-09-30 Ricoh Co Ltd Image processor, image processing method, program and recording medium
MX349448B (en) * 2012-08-10 2017-07-28 Ingrain Inc Method for improving the accuracy of rock property values derived from digital images.
CN103116987B (en) * 2013-01-22 2014-10-29 华中科技大学 Traffic flow statistic and violation detection method based on surveillance video processing
US9445713B2 (en) * 2013-09-05 2016-09-20 Cellscope, Inc. Apparatuses and methods for mobile imaging and analysis
US9552530B2 (en) * 2013-11-15 2017-01-24 Samsung Electronics Co., Ltd. Method and system to detect objects in multimedia using non-textural information within segmented region
GB2524810B (en) * 2014-04-03 2016-03-16 Corex Uk Ltd Method of analysing a drill core sample
CN104376556B (en) * 2014-10-31 2017-06-16 四川大学 Rock CT image target segmentation method
CN104599291B (en) * 2015-01-21 2017-07-28 内蒙古科技大学 Infrared moving target detection method based on structural similarity and saliency analysis
GB2536430B (en) * 2015-03-13 2019-07-17 Imagination Tech Ltd Image noise reduction
EP3398016A4 (en) * 2016-01-03 2019-08-28 HumanEyes Technologies Ltd Adaptive stitching of frames in the process of creating a panoramic frame
CN106204572B (en) * 2016-07-06 2020-12-04 合肥工业大学 Road target depth estimation method based on scene depth mapping
CN108090891B (en) * 2017-11-01 2020-10-30 浙江农林大学 Method and system for detecting missing cell region and newly added cell region


Also Published As

Publication number Publication date
CN112927238A (en) 2021-06-08

Similar Documents

Publication Publication Date Title
CN108827316B (en) Mobile robot visual positioning method based on improved Apriltag
CN110263717B (en) Method for determining land utilization category of street view image
CN107424142B (en) Weld joint identification method based on image significance detection
CN100565559C (en) Image text location method and device based on connected component and support vector machine
CN108363951B (en) Automatic acquisition method of deep learning sample library corresponding to remote sensing image land type identification
CN112633277A (en) Channel ship board detection, positioning and identification method based on deep learning
Chen et al. Shadow-based Building Detection and Segmentation in High-resolution Remote Sensing Image.
WO2007001314A2 (en) Automatically and accurately conflating road vector data, street maps, and orthoimagery
CN107220976B (en) Highway positioning method for aerial highway image
CN112036231B (en) Vehicle-mounted video-based lane line and pavement indication mark detection and identification method
CN102609723B (en) Image classification based method and device for automatically segmenting videos
CN108509950B (en) Railway contact net support number plate detection and identification method based on probability feature weighted fusion
CN113343976B (en) Anti-highlight interference engineering measurement mark extraction method based on color-edge fusion feature growth
CN114492619A (en) Point cloud data set construction method and device based on statistics and concave-convex property
CN113674216A (en) Subway tunnel disease detection method based on deep learning
CN113269724A (en) Fine-grained cancer subtype classification method
CN112990237B (en) Subway tunnel image leakage detection method based on deep learning
CN110400287B (en) Colorectal cancer IHC staining image tumor invasion edge and center detection system and method
CN115035089A (en) Brain anatomy structure positioning method suitable for two-dimensional brain image data
CN110634142A (en) Complex vehicle road image boundary optimization method
CN112927238B (en) Core sequence image annotation method combining optical flow and watershed segmentation
CN107194405B (en) Interactive semi-automatic high-resolution remote sensing image building extraction method
CN117197459A (en) Weak supervision semantic segmentation method based on saliency map and attention mechanism
Dong et al. Building Extraction from High Spatial Resolution Remote Sensing Images of Complex Scenes by Combining Region-Line Feature Fusion and OCNN
CN116758421A (en) Remote sensing image directed target detection method based on weak supervised learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant