CN108596191A - Method for extracting a single target with weak edges - Google Patents
Method for extracting a single target with weak edges
- Publication number
- CN108596191A (application CN201810368187.9A)
- Authority
- CN
- China
- Prior art keywords
- mark
- image
- foreground
- pixel
- weak edge
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2413—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
- G06F18/24147—Distances to closest patterns, e.g. nearest neighbour classification
-
- G06T5/70—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/149—Segmentation; Edge detection involving deformable models, e.g. active contour models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20112—Image segmentation details
- G06T2207/20116—Active contour; Active surface; Snakes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20172—Image enhancement details
- G06T2207/20192—Edge enhancement; Edge preservation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
Abstract
The present invention relates to a method for extracting a single target with weak edges, comprising the following steps: Step 1, input an image containing a single target; Step 2, manual marking; Step 3, smoothing; Step 4, gradient computation; Step 5, definition of a weak-edge parameter; Step 6, extraction of the marked foreground and marked background; Step 7, creation of the training data set and label set; Step 8, creation of the test data set; Step 9, training of a KNN classifier; Step 10, prediction on the test data set; Step 11, computation of the weak-edge strengthening set; Step 12, extraction of the initial contour; Step 13, iterative computation of the active contour model; Step 14, output of the target contour. The method can accurately detect weak edges and can be applied to the accurate extraction of pathological targets in medical images.
Description
Technical field
The present invention relates to the field of digital image processing, and in particular to a method for extracting a single target with weak edges.
Background technology
Edges are the most basic feature of an image. Marr's computational theory of vision regards the acquisition of the edge image as an early stage of vision, that is, the starting point of the entire visual process. Studies of the human visual system show that the edges of an image are especially important: a person can often recognize an object from only a rough contour line, so the edges of an image carry rich information. Image edge extraction has therefore always been an important link in image processing and pattern recognition, and it is widely used in many fields. Surveying the development of image processing technology, new theories and methods of edge extraction have emerged continuously, such as edge tracking methods and edge detection operators constructed from pixel neighborhoods, for example the common gradient operators and the Laplace operator. In recent years, new image processing techniques such as mathematical morphology, wavelet analysis, and BP neural networks have appeared in this field, greatly promoting the development of digital image edge extraction. However, judging from the published results, these methods suffer from the following problems:
(1) their computational complexity is high, making real-time processing difficult;
(2) their requirements on the data source are strict, and extraction results are poor for objects with indistinct edges.
Summary of the invention
The present invention provides a method for extracting a single target with weak edges. Using a semi-automatic approach that combines manual marking with an active contour model, and by strengthening weak edges, it preserves edge information to the greatest possible extent; the method has a small computational cost and produces reliable output.
The technical solution adopted to achieve this object is a method comprising the following steps:
Step 1: input an image I1 of height h and width w containing a single target Obj;
Step 2: manually mark the image I1: mark inside the target Obj to obtain the internal marked region A1, and mark outside the target Obj to obtain the external marked region A2; after manual marking, image I1 becomes the marked image I2;
Step 3: smooth the image I1 to obtain the smoothed image I3;
Step 4: compute the gradient grad of the smoothed image I3;
Step 5: define the weak-edge parameter wep based on the gradient grad: wep = 1/(1 + grad);
Step 6: extract the internal marked region A1 as the marked foreground F, and extract the external marked region A2 as the marked background B;
Step 7: create the training data set TrainSet and the label set LabelSet from the marked foreground F and the marked background B; TrainSet is an (M+N)×9 matrix and LabelSet is an (M+N)×1 column vector, where M is the number of pixels in the marked foreground F and N is the number of pixels in the marked background B; the i-th row of TrainSet is the feature vector FV of the i-th pixel (x, y), FV = [I1(x-1,y-1), I1(x-1,y), I1(x-1,y+1), I1(x,y-1), I1(x,y), I1(x,y+1), I1(x+1,y-1), I1(x+1,y), I1(x+1,y+1)], where I1(x, y) denotes the gray value of pixel (x, y) in image I1; when pixel (x, y) belongs to the marked foreground F, the i-th element of LabelSet is 1, and when pixel (x, y) belongs to the marked background B, the i-th element of LabelSet is 0;
Step 8: create the test data set TestSet as follows: for each pixel (m, n) of image I1, construct its feature vector FVtest from the values of the pixel and its 8-neighborhood, 9 pixels in total: FVtest = [I1(m-1,n-1), I1(m-1,n), I1(m-1,n+1), I1(m,n-1), I1(m,n), I1(m,n+1), I1(m+1,n-1), I1(m+1,n), I1(m+1,n+1)], traversing the pixels (m, n) with a double loop over 2 ≤ m ≤ h-1 and 2 ≤ n ≤ w-1;
Step 9: train a KNN classifier with the training data set TrainSet and the label set LabelSet of Step 7 to obtain a model M;
Step 10: test the test data set TestSet of Step 8 with the model M of Step 9, predicting the probability that each feature vector in TestSet belongs to the marked foreground F, to obtain the foreground probability set FSet;
Step 11: considering that the foreground probabilities of pixels on the two sides of a weak edge do not jump from 0 to 1 or from 1 to 0 as at a strong edge, and to improve the accuracy of weak-edge extraction, transform the foreground probability set FSet using the weak-edge parameter wep of Step 5 to obtain the weak-edge strengthening set WFSet, with the transformation formula WFSet = wep × (2 × (FSet - 0.5))²;
Step 12: extract the contour of the marked foreground F of Step 7 as the initial contour IniC;
Step 13: initialize the number of iterations Num; using the active contour model CM with the weak-edge strengthening set WFSet as a parameter, iterate on the initial contour IniC of Step 12; allocate an iteration buffer Buf of size h × w × Num to store the extraction result of each iteration, the contour Ct obtained after the t-th iteration being stored in Buf(t), an h × w two-dimensional array; when the stopping condition Buf(t) = Buf(t-1) = Buf(t-2) is satisfied, the iteration stops and the method proceeds to Step 14;
Step 14: output the target contour.
The colors used for manual marking in Step 2 are selected from red, green, and blue, and the colors of the internal marked region A1 and the external marked region A2 must differ.
The marked foreground F and marked background B in Step 6 are extracted as follows: separate the three color components of the marked image I2; select the component matching the color of the internal marked region A1 and merge the pixels whose value is 255 as the marked foreground F; select the component matching the color of the external marked region A2 and merge the pixels whose value is 255 as the marked background B.
The active contour model CM in Step 13 may be a Snake model or a level set.
The beneficial effect of the invention is that weak edges can be accurately detected, so the method can be applied to the accurate extraction of pathological targets in medical images.
Description of the drawings
Fig. 1 is the overall process flow chart of the present invention.
Detailed description of the embodiments
Specific embodiments of the present invention are described in detail below with reference to the accompanying drawings.
In step 101, an image I1 of height h and width w containing a single target Obj is input.
In step 102, image I1 is manually marked: the inside of target Obj is marked in red, giving the internal marked region A1, and the outside of target Obj is marked in green, giving the external marked region A2; after manual marking, image I1 becomes the marked image I2.
In step 103, image I1 is smoothed to obtain the smoothed image I3.
In step 104, the gradient grad of the smoothed image I3 is computed.
In step 105, the weak-edge parameter wep is defined based on the gradient grad: wep = 1/(1 + grad).
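As a minimal sketch of steps 104-105, the gradient magnitude of the smoothed image and the weak-edge parameter could be computed as follows (the function name is hypothetical, and taking grad as the central-difference gradient magnitude is an assumption; the patent does not fix a particular gradient operator):

```python
import numpy as np

def weak_edge_parameter(smoothed):
    """Sketch of steps 104-105: gradient of the smoothed image I3 and the
    weak-edge parameter wep = 1/(1 + grad). The gradient magnitude is a
    central-difference approximation (an assumption)."""
    gy, gx = np.gradient(smoothed.astype(float))  # per-axis central differences
    grad = np.hypot(gx, gy)                       # gradient magnitude
    return 1.0 / (1.0 + grad)                     # wep in (0, 1]; close to 1 where edges are weak
```

Because wep decreases as the gradient grows, it is near 1 in flat regions and across weak edges, and small at strong edges; step 111 exploits exactly this behavior.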
In step 106, the three color components of the marked image I2 are separated, giving the red component red, the green component green, and the blue component blue. The red component, matching the color of the internal marked region A1, is selected and the pixels whose value is 255 are merged as the marked foreground F; the green component, matching the color of the external marked region A2, is selected and the pixels whose value is 255 are merged as the marked background B.
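A minimal sketch of this channel-separation rule (the function name is hypothetical, and the code follows the literal rule of merging pixels whose matching component equals 255):

```python
import numpy as np

def extract_marks(marked_rgb):
    """Sketch of step 106: split the marked image I2 into its three color
    components and take the pixels whose red component is 255 as the marked
    foreground F and those whose green component is 255 as the marked
    background B. (Literal reading of the rule; in practice one might also
    require the other components to be 0 so white pixels are not matched.)"""
    red, green, blue = (marked_rgb[..., c] for c in range(3))
    F = red == 255    # marked foreground mask
    B = green == 255  # marked background mask
    return F, B
```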
In step 107, the training data set TrainSet and the label set LabelSet are created from the marked foreground F and the marked background B. TrainSet is an (M+N)×9 matrix and LabelSet is an (M+N)×1 column vector, where M is the number of pixels in the marked foreground F and N is the number of pixels in the marked background B. The i-th row of TrainSet is the feature vector FV of the i-th pixel (x, y), FV = [I1(x-1,y-1), I1(x-1,y), I1(x-1,y+1), I1(x,y-1), I1(x,y), I1(x,y+1), I1(x+1,y-1), I1(x+1,y), I1(x+1,y+1)], where I1(x, y) denotes the gray value of pixel (x, y) in image I1. When pixel (x, y) belongs to the marked foreground F, the i-th element of LabelSet is 1; when pixel (x, y) belongs to the marked background B, the i-th element of LabelSet is 0.
In step 108, the test data set TestSet is created as follows: for each pixel (m, n) of image I1, its feature vector FVtest is constructed from the values of the pixel and its 8-neighborhood, 9 pixels in total: FVtest = [I1(m-1,n-1), I1(m-1,n), I1(m-1,n+1), I1(m,n-1), I1(m,n), I1(m,n+1), I1(m+1,n-1), I1(m+1,n), I1(m+1,n+1)], traversing the pixels (m, n) with a double loop over 2 ≤ m ≤ h-1 and 2 ≤ n ≤ w-1.
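The 3×3-neighborhood feature vectors of steps 107-108 can be sketched as follows (the function name is hypothetical; the loop bounds mirror the patent's 1-based traversal 2 ≤ m ≤ h-1, 2 ≤ n ≤ w-1 in 0-based indexing):

```python
import numpy as np

def neighborhood_features(img):
    """Sketch of the feature vectors of steps 107-108: for every interior
    pixel, the gray values of its 3x3 neighborhood in row-major order,
    matching FV = [I1(x-1,y-1), I1(x-1,y), ..., I1(x+1,y+1)]."""
    h, w = img.shape
    feats = []
    for m in range(1, h - 1):          # interior rows only
        for n in range(1, w - 1):      # interior columns only
            feats.append(img[m - 1:m + 2, n - 1:n + 2].ravel())
    return np.array(feats)             # shape ((h-2) * (w-2), 9)
```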
In step 109, a KNN classifier is trained with the training data set TrainSet and the label set LabelSet of step 107, obtaining the model M.
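Steps 109-110 (training the KNN classifier and predicting foreground probabilities) can be sketched together with a brute-force KNN. This is an illustrative stand-in, not the patent's prescribed implementation, and the function name is hypothetical; the foreground probability of a test vector is taken as the fraction of foreground labels among its k nearest training vectors:

```python
import numpy as np

def knn_foreground_prob(train, labels, test, k=5):
    """Brute-force KNN sketch of steps 109-110: Euclidean distances to all
    training vectors, then the mean of the labels (1 = foreground F,
    0 = background B) over the k nearest neighbors gives the foreground
    probability, yielding the set FSet."""
    probs = []
    for fv in test:
        d = np.linalg.norm(train - fv, axis=1)  # distances to all training rows
        nearest = np.argsort(d)[:k]             # indices of the k nearest neighbors
        probs.append(labels[nearest].mean())    # fraction labeled foreground
    return np.array(probs)
```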
In step 110, the test data set TestSet of step 108 is tested with the model M of step 109, predicting the probability that each feature vector in TestSet belongs to the marked foreground F and obtaining the foreground probability set FSet.
In step 111, considering that the foreground probabilities of pixels on the two sides of a weak edge do not jump from 0 to 1 or from 1 to 0 as they do at a strong edge, and to improve the accuracy of weak-edge extraction, the foreground probability set FSet is transformed using the weak-edge parameter wep of step 105 to obtain the weak-edge strengthening set WFSet, with the transformation formula WFSet = wep × (2 × (FSet - 0.5))².
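The strengthening transform is a one-liner; the sketch below (function name hypothetical) makes its behavior explicit:

```python
import numpy as np

def weak_edge_strengthening(wep, fset):
    """Step 111: WFSet = wep * (2 * (FSet - 0.5))**2. Probabilities near 0.5
    (the gradual transition typical of a weak edge) are pushed toward 0,
    while confident probabilities near 0 or 1 are pushed toward wep."""
    return wep * (2.0 * (np.asarray(fset, dtype=float) - 0.5)) ** 2
```

The product with wep means the set stays small both where the classifier is uncertain and where the gradient is strong, so the active contour is attracted to the uncertain band straddling a weak edge.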
In step 112, the contour of the marked foreground F of step 107 is extracted as the initial contour IniC.
In step 113, the number of iterations Num is initialized. Using a level set as the active contour model CM, with the weak-edge strengthening set WFSet as a parameter, the initial contour IniC of step 112 is iterated. An iteration buffer Buf of size h × w × Num is allocated to store the extraction result of each iteration; the contour Ct obtained after the t-th iteration is stored in Buf(t), an h × w two-dimensional array. When the stopping condition Buf(t) = Buf(t-1) = Buf(t-2) is satisfied, the iteration stops and the method proceeds to step 114.
In step 114, the target contour is output.
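The buffered iteration and stopping rule of steps 113-114 can be sketched as follows (the names are hypothetical, and `evolve` stands in for one iteration of the active contour model CM, which the patent leaves as either a Snake model or a level set):

```python
import numpy as np

def iterate_until_stable(inic, evolve, num):
    """Sketch of steps 113-114: store each iteration's contour in the buffer
    Buf and stop once three consecutive entries are identical,
    Buf(t) = Buf(t-1) = Buf(t-2), or the iteration budget num is spent."""
    buf = [np.asarray(inic)]                      # Buf(0) holds the initial contour IniC
    for t in range(1, num):
        buf.append(evolve(buf[-1]))               # Buf(t): result of the t-th iteration
        if (t >= 2 and np.array_equal(buf[t], buf[t - 1])
                and np.array_equal(buf[t - 1], buf[t - 2])):
            break                                 # stopping condition met
    return buf[-1]                                # the target contour to output
```

Requiring three identical consecutive buffers, rather than one unchanged step, guards against the contour oscillating between two states.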
Claims (4)
1. A method for extracting a single target with weak edges, characterized by comprising the following steps:
Step 1: input an image I1 of height h and width w containing a single target Obj;
Step 2: manually mark the image I1: mark inside the target Obj to obtain the internal marked region A1, and mark outside the target Obj to obtain the external marked region A2; after manual marking, image I1 becomes the marked image I2;
Step 3: smooth the image I1 to obtain the smoothed image I3;
Step 4: compute the gradient grad of the smoothed image I3;
Step 5: define the weak-edge parameter wep based on the gradient grad: wep = 1/(1 + grad);
Step 6: extract the internal marked region A1 as the marked foreground F, and extract the external marked region A2 as the marked background B;
Step 7: create the training data set TrainSet and the label set LabelSet from the marked foreground F and the marked background B; TrainSet is an (M+N)×9 matrix and LabelSet is an (M+N)×1 column vector, where M is the number of pixels in the marked foreground F and N is the number of pixels in the marked background B; the i-th row of TrainSet is the feature vector FV of the i-th pixel (x, y), FV = [I1(x-1,y-1), I1(x-1,y), I1(x-1,y+1), I1(x,y-1), I1(x,y), I1(x,y+1), I1(x+1,y-1), I1(x+1,y), I1(x+1,y+1)], where I1(x, y) denotes the gray value of pixel (x, y) in image I1; when pixel (x, y) belongs to the marked foreground F, the i-th element of LabelSet is 1, and when pixel (x, y) belongs to the marked background B, the i-th element of LabelSet is 0;
Step 8: create the test data set TestSet as follows: for each pixel (m, n) of image I1, construct its feature vector FVtest from the values of the pixel and its 8-neighborhood, 9 pixels in total: FVtest = [I1(m-1,n-1), I1(m-1,n), I1(m-1,n+1), I1(m,n-1), I1(m,n), I1(m,n+1), I1(m+1,n-1), I1(m+1,n), I1(m+1,n+1)], traversing the pixels (m, n) with a double loop over 2 ≤ m ≤ h-1 and 2 ≤ n ≤ w-1;
Step 9: train a KNN classifier with the training data set TrainSet and the label set LabelSet of Step 7 to obtain a model M;
Step 10: test the test data set TestSet of Step 8 with the model M of Step 9, predicting the probability that each feature vector in TestSet belongs to the marked foreground F, to obtain the foreground probability set FSet;
Step 11: considering that the foreground probabilities of pixels on the two sides of a weak edge do not jump from 0 to 1 or from 1 to 0 as at a strong edge, and to improve the accuracy of weak-edge extraction, transform the foreground probability set FSet using the weak-edge parameter wep of Step 5 to obtain the weak-edge strengthening set WFSet, with the transformation formula WFSet = wep × (2 × (FSet - 0.5))²;
Step 12: extract the contour of the marked foreground F of Step 7 as the initial contour IniC;
Step 13: initialize the number of iterations Num; using the active contour model CM with the weak-edge strengthening set WFSet as a parameter, iterate on the initial contour IniC of Step 12; allocate an iteration buffer Buf of size h × w × Num to store the extraction result of each iteration, the contour Ct obtained after the t-th iteration being stored in Buf(t), an h × w two-dimensional array; when the stopping condition Buf(t) = Buf(t-1) = Buf(t-2) is satisfied, the iteration stops and the method proceeds to Step 14;
Step 14: output the target contour.
2. The method for extracting a single target with weak edges according to claim 1, characterized in that the colors used for manual marking in Step 2 are selected from red, green, and blue, and the colors of the internal marked region A1 and the external marked region A2 must differ.
3. The method for extracting a single target with weak edges according to claim 1, characterized in that the marked foreground F and marked background B in Step 6 are extracted as follows: separate the three color components of the marked image I2; select the component matching the color of the internal marked region A1 and merge the pixels whose value is 255 as the marked foreground F; select the component matching the color of the external marked region A2 and merge the pixels whose value is 255 as the marked background B.
4. The method for extracting a single target with weak edges according to claim 1, characterized in that the active contour model CM in Step 13 may be a Snake model or a level set.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810368187.9A CN108596191B (en) | 2018-04-23 | 2018-04-23 | Method for extracting single target with weak edge |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108596191A true CN108596191A (en) | 2018-09-28 |
CN108596191B CN108596191B (en) | 2021-06-29 |
Family
ID=63614059
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810368187.9A Expired - Fee Related CN108596191B (en) | 2018-04-23 | 2018-04-23 | Method for extracting single target with weak edge |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108596191B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109389600A (en) * | 2018-10-29 | 2019-02-26 | 上海鹰瞳医疗科技有限公司 | Eye fundus image normalization method and equipment |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102708370A * | 2012-05-17 | 2012-10-03 | Beijing Jiaotong University | Method and device for extracting foreground targets from multi-view images |
KR101484051B1 * | 2013-06-21 | 2015-01-20 | Chung-Ang University Industry-Academic Cooperation Foundation | Apparatus and method for preprocessing for CAD system using active contour method |
CN105389821A * | 2015-11-20 | 2016-03-09 | Chongqing University of Posts and Telecommunications | Medical image segmentation method combining cloud model and graph cut |
CN107578416A * | 2017-09-11 | 2018-01-12 | Wuhan University | Coarse-to-fine cascaded deep network method for fully automatic segmentation of the cardiac left ventricle |
Non-Patent Citations (3)
Title |
---|
SONG TU, YI SU: "Fast and Accurate Target Detection Based on", IEEE Transactions on Geoscience and Remote Sensing * |
SHI JIAO: "Application of fuzzy active contour models in image segmentation and change detection", China Doctoral Dissertations Full-text Database, Information Science and Technology * |
QI SHILE, WANG MEIQING: "Active contour model for adaptive segmentation of weak edges", Journal of Shandong University (Engineering Science) * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109389600A (en) * | 2018-10-29 | 2019-02-26 | 上海鹰瞳医疗科技有限公司 | Eye fundus image normalization method and equipment |
CN109389600B (en) * | 2018-10-29 | 2022-02-08 | 上海鹰瞳医疗科技有限公司 | Method and device for normalizing fundus images |
Also Published As
Publication number | Publication date |
---|---|
CN108596191B (en) | 2021-06-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11379987B2 (en) | Image object segmentation based on temporal information | |
CN108428229B (en) | Lung texture recognition method based on appearance and geometric features extracted by deep neural network | |
CN109087703B (en) | Peritoneal transfer marking method of abdominal cavity CT image based on deep convolutional neural network | |
CN110599537A (en) | Mask R-CNN-based unmanned aerial vehicle image building area calculation method and system | |
Abdollahi et al. | Improving road semantic segmentation using generative adversarial network | |
CN111986099A (en) | Tillage monitoring method and system based on convolutional neural network with residual error correction fused | |
Yin et al. | FD-SSD: An improved SSD object detection algorithm based on feature fusion and dilated convolution | |
CN109859184B (en) | Real-time detection and decision fusion method for continuously scanning breast ultrasound image | |
WO2023060777A1 (en) | Pig body size and weight estimation method based on deep learning | |
CN113240691B (en) | Medical image segmentation method based on U-shaped network | |
CN109978037A (en) | Image processing method, model training method, device and storage medium | |
CN103914699A (en) | Automatic lip gloss image enhancement method based on color space | |
Dal Poz et al. | Dynamic programming approach for semi-automated road extraction from medium-and high-resolution images | |
CN111325750B (en) | Medical image segmentation method based on multi-scale fusion U-shaped chain neural network | |
CN110992366B (en) | Image semantic segmentation method, device and storage medium | |
CN105389821B (en) | It is a kind of that the medical image cutting method being combined is cut based on cloud model and figure | |
CN113689445B (en) | High-resolution remote sensing building extraction method combining semantic segmentation and edge detection | |
WO2021027152A1 (en) | Image synthesis method based on conditional generative adversarial network, and related device | |
CN110084107A (en) | A kind of high-resolution remote sensing image method for extracting roads and device based on improvement MRF | |
Singh et al. | A hybrid approach for information extraction from high resolution satellite imagery | |
CN113643281A (en) | Tongue image segmentation method | |
CN108596191A (en) | A kind of simple target extracting method for having weak edge | |
CN104766068A (en) | Random walk tongue image extraction method based on multi-rule fusion | |
CN116797609A (en) | Global-local feature association fusion lung CT image segmentation method | |
CN114627136B (en) | Tongue image segmentation and alignment method based on feature pyramid network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | | |
SE01 | Entry into force of request for substantive examination | | |
GR01 | Patent grant | | |
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20210629 | |